Title: Diffusion Representation for Asymmetric Kernels via Magnetic Transform
Decision: Accept (poster)
Summary: The paper introduces the concept of the magnetic transform to define diffusion representations and diffusion distances for asymmetric kernels. The key idea is to transform an asymmetric kernel into a Hermitian one, enabling the application of standard techniques such as eigendecomposition. Leveraging this magnetic transform, the authors conduct experimental validations to assess the effectiveness of their proposed method. The results demonstrate that their method outperforms other dimension reduction methods in terms of separating clusters in asymmetric data. Strengths: - The presentation of the paper is clear and the paper is easy to follow. - A proper discussion on the selection of the scaling parameter is included, which is helpful for people implementing the method. - The experimental results support the effectiveness of the proposed method. Weaknesses: A major concern regarding the paper relates to its novelty and theoretical justification. Several points raise doubts regarding the originality and uniqueness of the proposed approach: - The magnetic transform, as presented in the paper, may not be considered entirely novel. Previous works, specifically [18] and [19], have already studied similar forms of the magnetic transform, particularly when the kernel represents the adjacency matrix of a directed graph. This raises questions about the extent to which the proposed approach differs from previous research. While the paper focuses on a different matrix form ($H$ instead of $D-H$ or $I-H$), more theoretical justification is needed to demonstrate the significance and appropriateness of this choice. - Besides, I would like to bring the authors' attention to the following paper: "Vector Diffusion Maps and the Connection Laplacian" by Singer and Wu. It is well known that the magnetic Laplacian is the same as the connection Laplacian when considering $SO(2)$ signatures.
In this reference, Singer and Wu have already considered diffusion maps and diffusion distances for connection graphs, which could well already be a generalization of what the authors are proposing. I think a comparison with this reference is needed. Additionally, there is a lack of clarity in the experiments regarding the importance of the parameter $t$ and the role of diffusion in the proposed method. The paper does not explicitly demonstrate how the choice of $t$ influences the results or why diffusion is significant. In fact, the current experiments only show that the eigenvectors of the normalized kernel are useful. This ambiguity hampers a thorough understanding of the method and its underlying principles. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - line 86: why do you choose $t$ to be an integer instead of a real number? - What is the kernel involved in the experiments in Section 5.1? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful review and insightful suggestions. We address your concerns below. **R3.1 Contributions of (Eq.3).** Thank you for your insightful thoughts on the difference from existing works. The contribution of our work lies in the development of a diffusion representation framework designed for asymmetric kernels. In contrast, existing works [1,2] primarily focus on asymmetric matrices. It is crucial to note that these two approaches are fundamentally distinct: our framework naturally produces a diffusion representation and function decomposition customized to a matrix derived from a kernel function, whereas the reverse may not hold true. This development was facilitated by exploring integral operators of magnetic transform kernels, and Proposition 1 provides the condition for the existence of the spectral decomposition. The magnetic transform is designed for asymmetric kernel cases. It is built upon the observation made in our manuscript, supported by Proposition 2, that the eigenvalues of the kernel function $H$ are not uniformly positive, and this observation serves as the basis for our design of MagDM. From it, it is apparent that asymmetric information is embedded not only in positive eigenvalues but also in negative eigenvalues, along with their corresponding eigenfunctions. By incorporating both positive and negative eigenvalues of $H$, we achieve enhanced robustness and effectiveness. In contrast, [1,2] (which use $I-H$) focus only on positive eigenvalues, neglecting this phenomenon; as a consequence, their approach suffers from limited robustness. The experimental results support the effectiveness of MagDM. **R3.2 The comparison with Singer and Wu's work [3].** We appreciate your insightful comment regarding Singer and Wu's work [3], which indeed provides a valuable foundation in the field. However, it is important to clarify that MagDM and [3] are not generalizations of each other.
The proposed method has three fundamental differences from [3]. Firstly, they address different problems: the proposed method addresses the challenge of asymmetric kernels, while [3] focuses on symmetric kernels. Secondly, the unitary transporter defined in (Eq. 3) is utilized for all pairs of samples, which reflects the asymmetric geometry of the data. In contrast, the SO(2) signatures in [3] are optimized only for nearby samples as best rotational alignments. [3] suggests that these samples, denoted as $x$ and $y$, should satisfy the condition $0 < \| x - y \|_{\mathbb{R}^p} < \sqrt{\epsilon}$, where $\epsilon$ is the scaling parameter, and the connection Laplacian is the limit as $N \rightarrow \infty$ and $\epsilon \rightarrow 0$ of the Vector Diffusion Map (VDM). In this scenario, SO(2) signatures are optimized only for pairs of samples that are very close to each other in Euclidean space. Thirdly, we employ a complex and unitary transporter in U(1), which is controlled by $q$. In [3], the SO(2) transporter is determined through local PCA. Additionally, the diffusion distance (Eq.6) in MagDM represents a functional weighted distance between complex-valued proximities of two samples, and the corresponding diffusion map is also complex-valued. In contrast, [3] operates in real-valued working spaces. A similar discussion is reported in Section 3.3 of the paper [1], which explains the connection between ME and VDM. These distinctions limit the applicability of [3] to the problem addressed in this paper, whereas MagDM offers a viable approach to handling asymmetric kernels. We appreciate you highlighting this point, and we will include these clarifications in the revised manuscript. **R3.3 The lack of clarity of the parameters in the experiments.** Thank you for your insightful comments. We have conducted additional experiments on MagDM, focusing on diffusion time $t$. The results are presented in Fig.
R1 of the attached PDF file under Author Rebuttal. In Fig. R1, we illustrate the clusters that evolve for different values of $t\in\{1,2,3,4,5,10\}$. At $t=1$, the local geometry becomes apparent, with four clusters visible on the real and imaginary axes. The diffusion distance between samples within the same group is small, while the diffusion distance between samples in different groups is large. As the diffusion process progresses, the diffusion distance between different groups decreases. At $t=4$, the groups start to connect with each other, forming a single structure. Finally, at $t=10$, the four groups merge to create a single super-cluster structure, with very small diffusion distances between points. Interestingly, the clustering on the phase axes remains clear and preserved throughout. This observation shows the strong ability of MagDM to capture asymmetric information during the diffusion process. The dynamic behavior we observe underscores the effectiveness of representing and visualizing the data. We will include these experiments in the Appendix of the manuscript due to space limitations. **R3.4 Discussion about diffusion time $t$.** Thanks for your careful reading. We acknowledge that this is a typo; $t$ is chosen to be a positive real number. We assure you that it will be rectified in the revised version. **R3.5 Kernels in Section 5.1** We utilize the adjacency matrices as Gram matrices of the data. The adjacency matrices are generated by the running flow probability: $K_{ij}=1$ if sample $x_i$ connects to sample $x_j$, and $K_{ij}=0$ otherwise. **References** [1] Fanuel M, et al. Magnetic eigenmaps for the visualization of directed networks[J]. ACHA, 2018, 44(1): 189-199. [2] Cloninger A. A note on Markov normalized magnetic eigenmaps[J]. ACHA, 2017, 43(2): 370-380. [3] Singer A, Wu H T. Vector diffusion maps and the connection Laplacian[J]. Commun Pure Appl Math, 2012, 65(8): 1067-1144.
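To make the construction discussed in R3.1 concrete, here is a minimal numerical sketch of a magnetic transform. It follows the standard recipe from the magnetic Laplacian literature [1,2] (a symmetrized magnitude modulated by a phase that encodes the asymmetry); the paper's exact Eq. 3 may differ in normalization, and the function name, test matrix, and the choice $q=0.25$ are ours.

```python
import numpy as np

def magnetic_transform(K, q=0.25):
    """Turn an asymmetric similarity matrix K into a Hermitian one:
    symmetric magnitude part times a unitary U(1) phase that encodes
    the direction of the asymmetry (controlled by q)."""
    K_sym = (K + K.T) / 2.0                # symmetric magnitude part
    theta = 2.0 * np.pi * q * (K - K.T)    # antisymmetric phase part
    return K_sym * np.exp(1j * theta)      # H == H.conj().T by construction

# Directed 3-cycle: a maximally asymmetric toy adjacency matrix
K = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
H = magnetic_transform(K)
assert np.allclose(H, H.conj().T)          # Hermitian, so eigenvalues are real
print(np.linalg.eigvalsh(H))               # spectrum contains both signs
```

For this directed cycle the spectrum works out to $\{-\sqrt{3}/2,\,0,\,\sqrt{3}/2\}$: negative eigenvalues do appear, which is exactly the observation (Proposition 2) motivating MagDM's use of both signs.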
--- Rebuttal Comment 1.1: Comment: I want to extend my thanks to the authors for addressing my questions. However, I would like to further clarify one of my concerns and pose another question for the authors to elucidate. 1. **On Eigenvalues and Robustness**: Could the authors provide more insight into why incorporating both negative and positive eigenvalues can enhance robustness? Mathematically speaking, the eigenvalues of $H$ and $I-H$ are identical, except for a change of sign and a constant shift. Is there any reference to a related study that can shed light on this aspect? Additionally, the experiments presented don't seem to align with the authors' claim of robustness. The focus on the first two eigenvectors leaves me uncertain about how both positive and negative eigenvalues were utilized. More details on this would be greatly appreciated. 2. **On Connection Laplacian**: I would like to specify my concern regarding the connection Laplacian. I was not referring to the connection Laplacian in the context of Riemannian geometry, but rather to the concept of the graph connection Laplacian, where groups are associated with graph edges (see [1,2] for terminology). In Section 3 of [3], the authors introduce a diffusion map defined for general $O(d)$-connection graphs (this is not restricted to the connection graphs generated by local PCA). Since there is an isomorphism between $U(1)$ and $SO(2)$, there appears to be an obvious derivation of a notion of diffusion map for $U(1)$-connection graphs. This is the aspect I would like the authors to address. [1] Fanuel and Bardenet, 2022. Sparsification of the regularized magnetic Laplacian with multi-type spanning forests [2] Bandeira et al., 2013. A Cheeger Inequality for the Graph Connection Laplacian [3] Singer and Wu, 2011. Vector diffusion maps and the connection Laplacian --- Reply to Comment 1.1.1: Comment: I would like to express my sincere gratitude to the reviewer for the valuable feedback and insightful comments.
**R3.1 Eigenvalues and Robustness** Thanks for your insightful comment. Indeed, $H$ and $I-H$ can be transformed into each other when considering both positive and negative eigenvalues. However, it should be noted that in [1,2], the negative eigenvalues are truncated, which means that $H$ and $I-H$ are not identical in this regard. We focus on the $k$ eigenvalues with the largest absolute magnitudes, including both positive and negative eigenvalues. These top $k$ eigenvalues correspond to the slowest diffusion processes, i.e., the global structures in the data. It is also believed that eigenvalues of larger magnitude carry better discriminative information, as discussed in [3]. One notable advantage of this approach is its robustness when confronted with perturbed asymmetric similarity. Even if the similarity measure is distorted, we can still obtain the principal components using this method. However, there is no specific reference to a related study, because most previous work has mainly focused on positive semi-definite matrices. When dealing with asymmetry, it becomes necessary to consider negative eigenvalues. This aspect is worth exploring in future research. Currently, we can observe this phenomenon experimentally in Fig. 1 of the manuscript. The case P=0 represents a situation where there are only links in one direction, making it an asymmetric but relatively easy scenario. When P=1/0.8, the situation becomes more complicated and the backward flow is perturbed by the forward flow; MagDM effectively distinguishes between the three groups while ME/MME struggle to capture the information, highlighting the stronger robustness and effectiveness of MagDM. [1] Fanuel M, et al. Magnetic eigenmaps for the visualization of directed networks[J]. ACHA, 2018, 44(1): 189-199. [2] Cloninger A. A note on Markov normalized magnetic eigenmaps[J]. ACHA, 2017, 43(2): 370-380. [3] Hamm J, Lee D D.
Grassmann discriminant analysis: a unifying view on subspace-based learning[C]. ICML. 2008: 376-383. **R3.2 Discussion about Connection Laplacian** Thank you for kindly specifying the concern. MagDM is not a simple derivation of a diffusion map for $U(1)$-connection graphs, because there are three key differences between MagDM and the graph connection Laplacian (GCL) approach. 1. Definition of the similarity matrices. MagDM focuses on asymmetric similarity between samples and calculates the $U(1)$ transporter for all pairs of samples. On the other hand, GCL focuses on symmetric cases and calculates the orthogonal $O(d)$ transporter for graph edges. While there is an isomorphism between $U(1)$ and $SO(2)$, they differ in implementation. MagDM measures scalar similarity between two samples, denoted as $x$ and $y$, while the similarity between $x$ and $y$ in GCL is a $d \times d$ block. The corresponding Gram matrix in MagDM, denoted by $\rho$, has a size of $N \times N$, while the corresponding Gram matrix in GCL, denoted by $S$, is $Nd \times Nd$. 2. Definition of the diffusion distance. In MagDM, the diffusion distance is defined as $[D^t(x,y)]^2=\int_X(\rho^t(x,u)-\rho^t(y,u))^2\mu(du)$. It measures the number of paths of length $t$ connecting $x$ and $y$ in the complex-valued plane. The diffusion distance is calculated as the sum of squares of the difference between $\rho^t(x,\bullet)$ and $\rho^t(y,\bullet)$. In GCL, the diffusion distance has a different form: $[D^t(x,y)]^2=\mathrm{Tr}(S^t(x,y)S^t(x,y)^\top)$, which measures the agreement between transporters together with the number of paths of length $t$ connecting $x$ and $y$. Based on these two definitions, it can be observed that $U(1)$-connection graphs exhibit different diffusion distances in MagDM and GCL. 3. Definition of the diffusion map. Due to the difference in diffusion distances, the diffusion maps of the two methods also differ.
In MagDM, the eigenvalues and eigenvectors are denoted $\{\lambda_i\}_{i=1}^N$ and $\{\phi_i\}_{i=1}^N$, respectively. The diffusion map is defined as $\Phi^t(x)=(\lambda_1^t\phi_1(x),\lambda_2^t\phi_2(x),\cdots,\lambda_N^t\phi_N(x))$. For the vector diffusion map, the eigenvalues and eigenvectors are denoted $\{\eta_i\}_{i=1}^{Nd}$ and $\{\psi_i\}_{i=1}^{Nd}$, respectively, and the diffusion map is defined by $\Phi^t(x)=((\eta_i\eta_j)^t \langle \psi_i(x),\psi_j(x)\rangle)_{i,j=1}^{Nd}$. These two diffusion maps have different forms. Overall, MagDM is not a derivation of a diffusion map for $U(1)$-connection graphs.
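To illustrate the mechanics, a MagDM-style embedding can be sketched in a few lines. This is a schematic of the general shape described above (eigenvectors scaled by $\lambda_i^t$, keeping eigenvalues of both signs), not the paper's code; the function name, the toy kernel, and the use of an integer $t$ (which keeps $\lambda_i^t$ real for negative $\lambda_i$) are our assumptions.

```python
import numpy as np

def magdm_style_map(H, t=2, k=2):
    """Schematic diffusion map for a Hermitian kernel matrix H:
    keep the k eigenpairs of largest |lambda| (positive AND negative)
    and scale each eigenvector by lambda**t."""
    lam, phi = np.linalg.eigh(H)           # lam real, phi complex columns
    keep = np.argsort(-np.abs(lam))[:k]    # largest absolute magnitude first
    return phi[:, keep] * lam[keep] ** t   # row n = coordinates of sample n

# Toy Hermitian kernel whose spectrum contains both signs
H = np.array([[0.0,   0.5j, -0.5j],
              [-0.5j, 0.0,   0.5j],
              [0.5j, -0.5j,  0.0]])
emb = magdm_style_map(H, t=2, k=2)

# Euclidean distances between rows then play the role of diffusion distances
D = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
```

Sorting by $|\lambda|$ rather than by signed value is the point of contrast with the $I-H$ approaches discussed in R3.1: a large negative eigenvalue is retained rather than truncated.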
Summary: This paper studies the asymmetric kernel case for the diffusion map. The authors utilize the magnetic transform technique to develop a diffusion representation framework for the asymmetric kernel case. They investigate several properties of the proposed magnetic transform kernel. The challenge lies in defining the corresponding integral operators and diffusion geometry. In their experiments, the authors show that their proposed MagDM framework is competitive with and even superior to existing dimension reduction techniques like DM, KPCA, etc. Strengths: This work combines the magnetic transform and diffusion map techniques to handle the asymmetric kernel case of the diffusion map. It appears to be pioneering research work, defining new concepts. Weaknesses: All the experiments present only qualitative results; some quantitative results would help figure out the advantages of the proposed MagDM framework over other existing techniques. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I am not familiar with the topic and feel quite confused about Proposition 1: why do we want to assume a Hilbert-Schmidt kernel $X$ and define another Hilbert-Schmidt kernel $H^{(q)}$? The goal seems to be to define a kernel with conjugate symmetry. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have mentioned some limitations of the proposed MagDM framework in the appendix. For example, different choices of asymmetric kernel functions would impact the performance of the framework. Currently, there is no clue about how to handle this.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for appreciating the novelty of our work and for providing insightful comments. We address your concerns below. **R2.1 Some quantitative results would help figure out the advantages of the proposed method.** We sincerely appreciate your insightful comments. In response, we have conducted the quantitative experiments as suggested. For further details, please refer to **R0.1** in the Author Rebuttal. **R2.2 The goal of defining another Hilbert-Schmidt kernel.** Defining the Hilbert-Schmidt kernel is a preparatory step for MagDM. The goal is to define the Hermitian kernel function whose spectral decomposition is utilized to support the proposed diffusion maps (Eq.8). The existence of the spectral decomposition requires the corresponding kernels to be both conjugate symmetric and Hilbert-Schmidt kernels. Therefore, it is necessary to define the Hilbert-Schmidt kernel $H^{(q)}$ based on Proposition 1. --- Rebuttal Comment 1.1: Comment: Thank you for your clarification and I will keep my score as 7. --- Reply to Comment 1.1.1: Comment: We deeply appreciate your valuable feedback and insightful comments. We will keep improving our manuscript to incorporate your insights.
Summary: The paper proposes a new method called MagDM that connects Diffusion Maps (DM) and the Magnetic Transform (MT). DM is a nonlinear dimension reduction technique that obtains a lower-dimensional embedding using the information of diffusion distances, which assume symmetry. However, in practice the intrinsic and extrinsic geometries of data can be asymmetric. MT is a promising technique that converts an asymmetric matrix to a Hermitian one, making it suitable for working with asymmetry, but the connection between DM and MT hasn't been explored much. Hence, the main contribution of the paper is to make such a connection between DM and MT. Specifically, the authors develop a diffusion framework endowed with asymmetric kernels using MT. The integral operators of MT, whose compactness is established if the asymmetric kernels are Hilbert-Schmidt kernels, are also explored. The authors also prove an important property, namely that the spectral radius of the proposed integral operator is 1, which ensures that MagDM will not diverge during the diffusion process. Strengths: The ideas presented here are original. The paper is generally well-written. There are numerous qualitative experiments. Since asymmetric data is common, the proposed approach could have high significance and impact and could be useful in other methods that use the diffusion framework. Weaknesses: The paper is missing quantitative experiments. While the qualitative experiments in the paper do suggest that MagDM is capturing the asymmetric geometry better than other approaches, the scatter plots are difficult to interpret without expert knowledge of the underlying data. It would be better to have some numerical evaluation of this. I recommend the authors come up with a metric that measures this performance for comparing the different methods. Other comments: The diffusion framework could be clarified further.
For example, what is the geometric interpretation of the diffusion distances and how do they relate to $\rho(x,y)$? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors use quantitative measures of how well the methods perform? 2. In Figure 1, it is shown that MagDM has better results compared to ME & MME when P has large values (greater than 0.5). On the other hand, when P is less than 0.5, specifically when P = 0 and P = 0.2, all three methods do well at separating the groups. So if P = 0 and P = 0.2 are also considered to have a high level of asymmetry information, how does MagDM outperform the others? 3. In Figure 3, from the perspective of dimension reduction, I would like to compare the result to the original structure. Since the original structure is connected for a Möbius strip, I would expect the lower-dimensional embedding to be connected in a loop as well. However, you mentioned in line 210 that you focus on the asymmetric geometry of the data, but I am not sure what the "asymmetric structure" looks like in the original space colored by the rainbow map. So I think you should add additional subplots to your figures to show us how the original data looks. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments and appreciation of the novelty. We address your concerns below. **R1.1 Quantitative experiments are needed.** We sincerely appreciate your insightful comments. In response, we have conducted the quantitative experiments as suggested. For further details, please refer to **R0.1** in the Author Rebuttal. **R1.2 Discussions of the diffusion framework.** The MagDM framework, proposed for asymmetric scenarios, employs a mathematical technique to transform a pair of real-valued asymmetric similarities into complex-valued, conjugate symmetric similarities. By leveraging the concept of random walks on graphs, the framework computes the diffusion distance $D^t(x,y)$. This distance is a functional weighted $L^2$ distance between complex-valued proximities, enabling a more flexible and adaptive measure of similarity. $D^t(x,y)$ will be small if two samples are similar in the complex-valued plane, and vice versa. Consequently, it becomes capable of capturing both local and global patterns within the data. In this framework, the diffusion kernel, denoted as $\rho(x,y)$, represents the transition probabilities of the random walks. It signifies the likelihood of transitioning from one data point to another within the network. In summary, the proposed diffusion distance within the MagDM framework facilitates a geometric interpretation of asymmetric similarity. It encompasses the network structure and takes into account the asymmetric relationships between data points. We will include these clarifications in the revised manuscript. **R1.3 How does MagDM outperform others in Figure 1?** Thank you for your insightful observations. Both ME and MME are designed to extract asymmetric geometric information. When P=0, there are only links in one direction, making it an asymmetric but relatively easy scenario.
When P=1/0.8, the situation becomes more complicated, and ME/MME may struggle to capture the information. In contrast, MagDM effectively distinguishes between the three groups in both scenarios, highlighting its stronger robustness and effectiveness. The reason can be intuitively explained as follows: ME/MME focus solely on the positive part of the eigenvalues of the Hermitian kernel function (Eq.3), but as Proposition 2 demonstrates, the eigenvalues are not necessarily all positive. In the proposed method, we take into account all the eigenvalues regardless of their sign, and the advantages are more pronounced for hard tasks, i.e., P=1/0.8. **R1.4 Results about the Möbius strip** Your intuition and suggestions are appreciated. We put the original structure of the Möbius strip in Appendix F.3 of the supplementary material, which is indeed a loop as you expected. In the main body, we demonstrate the result from another view: the color drifts counterclockwise on the x-y plane from $\angle xy=0$ to $\angle xy=2\pi$, so the embedding appears more like a line. In the final version, we will add the figures to the main body. --- Rebuttal Comment 1.1: Comment: I have read through the reviews and the authors' responses. They have addressed my concerns and I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: We are very grateful for your reconsideration of our work. We will keep improving our manuscript to incorporate your insights.
Rebuttal 1: Rebuttal: Dear Program Chairs, Area Chairs, and Reviewers, First of all, we would like to thank you for your time, constructive critiques, and valuable suggestions, which greatly help us improve the work. We are grateful to reviewers SsqM and sG91 for recognizing the novelty and significance of our work. They acknowledge its potential to have a high impact in addressing asymmetric kernel cases. We are also grateful that the reviewers unanimously agree that our qualitative experimental results support the effectiveness of the proposed method. We address the concern about quantitative experiments below. **R0.1 Reviewers SsqM and sG91 think quantitative experiments are needed.** Thanks for your insightful suggestions. We agree with you that quantitative results can better show the advantages. However, this is a perennial problem for unsupervised learning, and for dimension reduction specifically there are no well-accepted quantitative criteria; consequently, recent papers [1,2] also lack quantitative experiments. Following your suggestions, we have employed four evaluation metrics commonly used for clustering. Given that the original data is clustered in a high-dimensional space, our expectation is that the embedded data will also exhibit clustering patterns after dimension reduction. We have incorporated quantitative experiments to measure the performance of the proposed method (MagDM) in comparison to two other methods (ME and MME). In the first network, we set the forward flow probability to 0.5 and the backward flow probability to P=0/0.2/1. We then cluster the low-dimensional embeddings of the three methods using the k-means algorithm with k=3. To evaluate the clustering results, we consider two internal evaluation metrics, the silhouette coefficient (SC) and the Davies-Bouldin index (DB), as well as two external evaluation metrics, the adjusted Rand index (ARI) and the normalized mutual information (NMI).
These metrics assess the quality of clustering. If you are interested in these indices, we are happy to discuss them further. The results of the quantitative experiments evaluating the performance of the proposed MagDM are presented in Table 1 of the attached PDF file. It can be seen that MagDM achieves higher scores on these metrics. Particularly when P=1, where interconnections are more complicated, MagDM significantly outperforms other methods, which highlights the superior effectiveness and robustness of MagDM. We will include these experiments in the Appendix of the manuscript due to space limitations. We sincerely look forward to further discussions with the reviewers. Best wishes, Anonymous author(s) of Paper6885 **References** [1] Fanuel M, Alaíz C M, Fernández Á, et al. Magnetic eigenmaps for the visualization of directed networks[J]. Applied and Computational Harmonic Analysis, 2018, 44(1): 189-199. [2] Gomez A A, Neto A J S, Zubelli J P. Diffusion representation for asymmetric kernels[J]. Applied Numerical Mathematics, 2021, 166: 208-226. Pdf: /pdf/dbe8602e7cea52de25f91fef422da463d8d89185.pdf
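All four metrics named in R0.1 are available in scikit-learn, so an evaluation of this kind can be wired up in a few lines. The synthetic embedding below (three Gaussian blobs standing in for the low-dimensional embeddings of the three groups) is purely illustrative; the seeds and cluster layout are not from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             adjusted_rand_score,
                             normalized_mutual_info_score)

# Stand-in for a 2-D embedding with three well-separated groups
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 30)
Z = rng.normal(size=(90, 2)) + 5.0 * labels[:, None]

# k-means with k=3, as in the rebuttal
pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)

sc  = silhouette_score(Z, pred)                   # internal, higher is better
db  = davies_bouldin_score(Z, pred)               # internal, lower is better
ari = adjusted_rand_score(labels, pred)           # external, 1 = perfect
nmi = normalized_mutual_info_score(labels, pred)  # external, 1 = perfect
print(f"SC={sc:.3f}  DB={db:.3f}  ARI={ari:.3f}  NMI={nmi:.3f}")
```

Note that SC and DB need only the embedding and predicted labels, while ARI and NMI also require the ground-truth group labels, which is why the rebuttal pairs two internal metrics with two external ones.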
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Title: Learning threshold neurons via edge of stability
Decision: Accept (poster)
Summary: The paper is a study on the dynamics of neural network training, particularly focusing on the edge of stability. The authors explore the behavior of gradient descent in the context of a simple sparse coding model. They find that the dynamics of training exhibit a phase transition at the edge of stability, where the learning dynamics switch from a regime of stable convergence to one of chaotic oscillations. The authors also demonstrate that the edge of stability is a critical point where the learned representations transition from being dense to sparse. This transition is shown to be sharp and is characterized by a power-law behavior. The authors further show that these findings are indicative of behaviors observed in practical neural network training settings, including a two-layer ReLU network trained on the full sparse coding model with an unknown basis, as well as a deep neural network trained on CIFAR-10. Strengths: The paper addresses a complex problem in the field of neural network learning, focusing on the edge of stability. The authors' exploration of the phase transition in learning dynamics at the edge of stability, and the transition from dense to sparse representations, appears to be a novel contribution to the field. The quality of the paper is good. The authors have conducted a thorough investigation into the dynamics of neural network training. Their findings are well-supported by their data and analysis, indicating a high level of methodological soundness. The presentation of the paper is also good. The authors have clearly communicated their research, making it easy for readers to follow their methodology, findings, and conclusions. The paper appears to be well-structured and well-written. Weaknesses: The authors assert that a large learning rate is necessary to learn the bias. This assertion holds true in practical scenarios, such as training ResNet-18 on binary CIFAR-10 using various learning rates. 
However, I am curious about the potential effects of freezing the bias during training. In an experiment I conducted, I found that freezing the bias did not significantly impact the results. I would appreciate it if the authors could investigate this aspect further and discuss its implications. This work investigates a simplistic neural network model that consists of only three parameters. While this provides a clear and controlled environment for study, it represents a significantly pared-down model when compared to the complex, over-parameterized models commonly used in deep learning. The insights gained from this study may not fully capture the intricacies and nuances of real-world deep learning applications. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In lines 31-32, the authors posit that an effective network maintains a negative bias $b$ to filter out noise, and ensures that $a^-$ is less than 0 and $a^+$ is greater than 0 to output correct labels. I would appreciate it if the authors could elaborate on this assertion. From my perspective, if the noise is of opposite sign to the label (for instance, $y_i = 1$ when the output of $x[i]$ is less than 0), assigning a negative value to the bias $b$ may not be beneficial, as both ReLU terms would return 0 in this scenario. Could the authors clarify this point? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
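For readers unfamiliar with the setup, the three-parameter model the review refers to can be sketched as below. This is one plausible reading of the description in lines 31-32 (two ReLU units with output weights $a^+, a^-$ and a shared bias $b$); the exact parameterization in the paper may differ, and the numeric values here are illustrative, not trained.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Illustrative trained parameters: a_plus > 0 > a_minus and a negative bias b
a_plus, a_minus, b = 1.0, -1.0, -0.5

def f(x):
    """Two-ReLU threshold neuron acting on a scalar signal coordinate x."""
    return a_plus * relu(x + b) + a_minus * relu(-x + b)

# The negative bias thresholds out small-magnitude (noise) inputs ...
assert f(0.2) == 0.0 and f(-0.2) == 0.0
# ... while large-magnitude (signal) inputs pass through with the correct sign
assert f(1.0) > 0 and f(-1.0) < 0
```

This also makes the reviewer's question concrete: with $b < 0$, any input whose magnitude is below $|b|$ is mapped to exactly zero by both ReLU terms, regardless of its sign.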
Rebuttal 1: Rebuttal: Thank you for your review. We address some of your points below. - Regarding freezing the biases: we acknowledge that even with frozen biases, it is possible for neural networks to succeed at learning binary CIFAR-10. The full story of generalization in deep learning is complex and we do not claim to fully explain it in our paper. Our scope is more limited, as we aim to identify one possible mechanism by which optimization impacts generalization; nevertheless, we see that the phenomenon we observe is indeed borne out in some practical training examples. Although your question is interesting and important, it is beyond the scope of our work. - We agree that the insights we reveal are still far from the complex intricacies of training bigger networks. However, our insights carry over to more general models; for example, see the interesting recent work [1]. [1] Song and Yun ``Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory’’ (https://arxiv.org/abs/2307.04204) - Finally, regarding your last question: we consider the regime of sparse coding in which the noise level is not too high (so that the problem is still statistically solvable); in particular, we expect that for most of the examples, the label is of the same sign as the signal coordinate (by definition of the model). In this case, the architecture that we proposed will correctly threshold out the noise. Please let us know if you have further questions regarding this point. --- Rebuttal Comment 1.1: Comment: Thank you for your response and for addressing some of my concerns. However, I would like to reiterate my point regarding the role of biases in your study. While I understand that the scope of your paper is limited and that the full story of generalization in deep learning is complex, the emphasis on biases in your theory and experiments is significant.
Your paper claims that biases play an important role in the dynamics of neural network training, and you present experiments to verify this theory. My concern is that the assertion regarding biases may be misleading. You acknowledge that freezing the biases did not significantly impact the results. This suggests that the role of biases may not be as critical as your paper implies. While I acknowledge the insights you have provided and the references to recent work, I believe that a more thorough examination of the role of biases, both theoretically and experimentally, would strengthen your paper. I appreciate your engagement with my review, and I hope you will consider these points as you continue to refine your work. My overall assessment and rating remain unchanged. --- Reply to Comment 1.1.1: Title: Thank you for your feedback! Comment: Thank you for your feedback. We acknowledge your point on the role of biases but the goal of our work was not to elucidate the EoS in all settings but to clearly exhibit this phenomenon and its implications in one concrete setting. Since the mechanism for learning with frozen biases seems to be completely different from the bias learning problem, the setting you raise is out of scope of the current paper. Meanwhile, the setting we do consider is often encountered in practice. With that said, it seems unreasonable to expect us to develop a complete theory for such diverse scenarios — could you provide further clarification on your expectations?
Summary: The authors analyze the dynamics of pairs of ReLU neurons with an input bias, no weight matrix, and a readout layer. They show that it takes a large learning rate for non-zero biases to be learned, and at these large learning rates there are formal guarantees of EoS behavior as well. They use a model with random weights and shared biases to show that the EoS induces a phase transition in the behavior of the biases. Strengths: The model is straightforward, and the analysis of the EoS seems correct. The phase transition in the bias is well established by this work and is an interesting phenomenon. Weaknesses: Though this work claims to study a realistic model, the one-neuron model is still very much a toy model. Many other toy models have already been studied in recent works, and it is unclear what the analysis of this model in particular adds. In addition, many networks are trained without the bias term, so there may be limited utility for theory which focuses mainly on that term. Additional suggested references: [EoS in a quadratic regression model with formal convergence guarantees](https://arxiv.org/abs/2210.04860) [EoS and weight magnitude in quadratic expansion of ReLU network](https://arxiv.org/abs/2205.11787) Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is the bias learning phase transition a good way to describe single-hidden-layer ReLU models with multiple neurons? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 1 poor Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review, and for the additional references; we will cite them. We address your points below. - Firstly, regarding the advantage of our toy model analysis over prior works: this is a fair criticism. We argue that compared to prior models, our model and analysis is the simplest demonstration of EoS, which is important for intuition. Due to this simplicity, we are able to obtain refined results on how the shape of the loss affects the limiting sharpness and convergence time (see Figure 9). - Furthermore, we note that the simplicity of our approach inspired a follow-up work (Song and Yun [1]) where the authors extend our main results to more general neural networks. [1] Song and Yun ``Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory’’ (https://arxiv.org/abs/2307.04204) - In addition to the simplicity of our model, another one of our contributions is to clearly link the phenomenon we study with practical implications for learning ``threshold neurons.’’ - Secondly, regarding your question on whether the bias learning describes the behavior of ReLU networks with multiple neurons: in Appendix A.1, we perform experiments for the full sparse coding model with a ReLU network with multiple neurons and we observe qualitatively similar behavior, indicating that our findings do generalize beyond the toy model (see also our experiments for ResNet18 in Appendix A.2). Thank you once again for taking the time to review our paper. Given that you have not raised any concerns regarding the correctness or novelty of our work, if you believe that we have addressed your points (in particular, that we provide conceptual and generalizable insights on the relationship of neural network optimization and generalization), we would appreciate it if you would consider raising your score. 
--- Rebuttal Comment 1.1: Title: Response to authors Comment: Regarding simplicity of the model: it is still not clear if the "simplest" model displaying EoS is the best in this case, if the mechanism does not shed light on how EoS occurs in more practical models. I'm also not sure what to take away from the ResNet-18 experiments; I can see that there are oscillations, but it's not clear that the mechanism from the paper is at play here. I also don't quite understand why the average value matters for downstream tasks. For this reason, I will keep my review score; I thank the authors for their discussion.
Summary: In this paper, the authors studied the edge of stability (EoS) phenomenon (training with a large learning rate) in simple settings. Specifically, the authors first studied the problem of minimizing $\ell(xy)$ ($\ell$ is the loss function) and showed that under certain conditions on the loss function, gradient descent is either in the gradient flow regime or the EoS regime depending on the initialization, which means different stepsizes can lead to final solutions with different maximum eigenvalues of the Hessian of the loss. Then, to study the problem of learning threshold neurons, they instead considered a simplified problem which they called the mean model. They proved that under this mean model, a larger stepsize could decrease the bias and therefore learn the neuron, while the bias remains small under a small stepsize. Empirical experiments showed that the training dynamics of this mean model and of the actual training are similar. Strengths: 1. The paper is clearly written and easy to follow. Proof sketches and dynamics are given to help the readers understand the proofs more easily. Experiments are provided to support the results. 2. Understanding the effect of large learning rates in deep learning training (specifically the EoS phenomenon) is an important problem and may give more insight into the practice of deep learning training. 3. This paper provides both theoretical and empirical results on the simple model of learning threshold neurons. The theoretical results, though they do not directly answer the question of learning threshold neurons, give some possible insights and seem to be interesting. Weaknesses: 1. The current paper focuses on the simple setting of minimizing $\ell(xy)$ as well as a mean model introduced to approximate learning threshold neurons. It would be interesting to extend the analysis to broader settings (e.g., the real dynamics of learning threshold neurons and the sparse coding model). 2.
The results in the EoS regime have an additional assumption on the dynamics, though this is understandable from a technical perspective. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I was wondering what the convergence times are for both the gradient flow regime and the EoS regime. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are clearly discussed in the paper. This is a theoretical work and therefore does not seem to have negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We address some of your points below. - We agree that extending our analysis to more general settings is of great interest, although it’s beyond the scope of this work. In fact, a recent work [1] was able to prove similar results to ours for more general settings of neural networks, building on the analytical techniques developed in this work. [1] Song and Yun ``Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory’’ (https://arxiv.org/abs/2307.04204) - Regarding the convergence time in the gradient flow and EoS regimes, please see the paragraph titled “Convergence rate estimates” in Appendix B, as well as Figure 9. In particular, we show that the iteration complexity transitions from $\eta^{-1}$ in the gradient flow regime to $\eta^{-\min(\beta/(\beta-1),\, 2)}$ (or $\log(1/\eta)/\eta^2$ when $\beta = 2$) in the EoS regime, and that these convergence rates agree well with experiments (Figure 9). Unfortunately, we were not able to include this in the main text due to space constraints, but we will try to include a pointer. We hope that this addresses your question. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I will keep my score.
Summary: The paper studies two simple problems (single-neuron linear network and mean ReLU model for sparse coding with unknown basis) to investigate learning of threshold neurons. In particular, the authors discover a threshold for the learning rate below which threshold neurons are not learned and where NN training follows the gradient flow. For larger learning rates, training trajectories are oscillating (edge of stability) and threshold neurons are learned, which is essential for solving the sparse coding problem. Strengths: The paper is extremely well written and provides a gentle introduction to concepts such as sparse coding, edge of stability, and threshold neurons, which makes it accessible to readers not familiar with these terms. The results in Sections 2 and 3 seem rigorous and are well presented. Particularly Section 1.2 is an excellent summary of the results that follow and helps to put them into context. The theoretical results seem to agree with the experimental evidence. The fact that the considered examples are simple is not a drawback, as this is expected for works that, for the first time, study a certain phenomenon from the theoretical perspective. Nevertheless, I appreciate the authors' effort to back their claims and show the generality of the phenomenon via the experiments on CIFAR-10 in Fig. 3. Weaknesses: None, actually -- but see the questions below. EDIT: Lowered score since practical implications of results are not fully clear (cf. frozen bias). If I have to mention something, then it is the fact that Fig. 1 appears slightly out of context and could be shifted to the second page. Furthermore, there are a few typos that can be removed (139 convergenec, 262 in turns out, 268 from which can more accurately, 289 wiht). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Some questions came up when reading the paper. Taking them into account could further improve the readability of the manuscript: - In Fig.
1, why is the bias initially at -0.2? Was this a randomly chosen initial bias? - What is the definition of "sharpness", as discussed in Sec. 2.2? - It would be good to place the mathematical expression for the gradient flow in Sec. 2, maybe close to the GD iterates in line 177. - It is not clear, from the main part of the paper, why the step size for the $A$-dynamic is multiplied by $2d^2$ (e.g., in (5)). It would be good to explain that briefly in the main part of the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have listed several interesting future avenues of research. The limitations are evident from the simple setup. There are no negative societal impacts foreseen. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your generous review and we are glad that you enjoyed our paper. We will follow your suggestion to move Figure 1 to page 2 and we will correct the typos that you mentioned. Below, we answer your questions: - In Fig. 1, the bias was indeed initialized at 0. Note that Figure 1 plots the final bias after training. Over a long time scale, the bias of the ReLU model does become slightly negative, even for small step sizes (around -0.2 in the plot); we believe this is because the GD dynamics for the real network (as opposed to the mean model approximation) is noisier. However, it is remarkable that the real network still exhibits a sharp phase transition. We will include an explanation of this point in the paper. - The sharpness is defined as the largest eigenvalue of the Hessian at the current iterate. We will clarify this in the document. - We will make this change. - We will include a brief explanation of this point in the main text.
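The $2/\eta$ stability threshold that the sharpness discussion in these threads revolves around can be illustrated with a minimal sketch (an assumed one-dimensional quadratic toy, not the paper's model): on $f(x) = \tfrac{1}{2}\lambda x^2$, each gradient descent step multiplies $x$ by $(1 - \eta\lambda)$, so iterates converge monotonically when $\eta\lambda < 1$, oscillate but converge when $1 < \eta\lambda < 2$, and diverge when $\eta\lambda > 2$.

```python
# Assumed toy example: GD on f(x) = 0.5 * lam * x**2, whose gradient is lam * x.
def gd_quadratic(eta, lam, steps=50, x0=1.0):
    x = x0
    for _ in range(steps):
        x -= eta * lam * x   # x <- (1 - eta * lam) * x
    return abs(x)

print(gd_quadratic(eta=0.05, lam=10.0))  # eta*lam = 0.5: monotone convergence
print(gd_quadratic(eta=0.15, lam=10.0))  # eta*lam = 1.5: oscillating convergence
print(gd_quadratic(eta=0.30, lam=10.0))  # eta*lam = 3.0: divergence
```

Stable convergence thus requires the sharpness (here simply $\lambda$) to stay below $2/\eta$, which is the self-stabilization condition studied in the edge-of-stability literature.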
NeurIPS_2023_submissions_huggingface
2023
Learning Space-Time Continuous Latent Neural PDEs from Partially Observed States
Accept (poster)
Summary: The authors present a grid-independent generative model to learn PDEs, which supports irregular grids. Amortized variational inference is employed for posterior approximation, and multiple shooting, a recent method to train neural ODEs efficiently, is used and adapted to PDEs. Results show a quite significant dominance over two SOTA PDE learning methods. Strengths: - The paper is well written and very well presented. The motivation is clear, the model formalism is well presented, the general flow is good. I enjoyed reading the paper. - The figures are well made and help build intuition about the different elements. - The contribution of the paper is the model itself, which appears to be a natural extension of Iakovlev et al., 2023, adapted for a PDE instead of an ODE setting. It is therefore not the most original methodologically, but a completely proper and natural contribution. Note that even though grid-independence and irregular grid prediction capabilities have been studied before for PDE learning (notably by the SOTA methods the paper compares to), which again restrains the novelty of the paper, the present methodology seems to significantly outshine the previous methods, which therefore justifies the contribution. Weaknesses: - Relating to the previous comment regarding the contribution in the strengths section: because it is not a new gap that the method is filling, but rather an improvement over SOTA methods, the contribution of the paper, besides being an extension of multiple shooting for neural ODE learning, relies on the performance in order to be strong. Therefore, it would be very useful to have more extensive experiments: - of SOTA methods on finer and coarser grids, in order to test super-resolution capabilities of the method compared to the other SOTA methods.
An experiment is done in Figure 8, but not compared to SOTA methods - to test prediction on a long future horizon for all models including SOTA - trying multiple grid sizes when training, instead of only multiple numbers of trajectories, as shown in Figure 10 - It seems that it lacks some important details - In the experiments regarding time specification, specifically the time horizon and the time resolution. It is mentioned in the appendix that "25 time points" are used for the first two datasets and 20 "time points" for the scalar flow, but it is not clear to me what they represent in real time. Maybe I missed it somewhere? If not, please mention it, as this is important when comparing to SOTA methods, as each method seems to have its strength, regarding spatial or time capabilities. - In the figures: please add the time resolution on Figures 6 and 9 (how long is each time step?), and the horizon predicted on all the figures, to know what the MAE corresponds to! If it is the same every time, please state it clearly somewhere (again, I apologize if I missed it...!) - How is the generative aspect of the method used? The fact that it is probabilistic is partly what makes it interesting as opposed to SOTA models (MAgNet and DINo, which are not generative), so it is a pity that no probabilistic outputs are used or shown in the results, but rather mean predictions. I think in general, the authors could move a bit of the formalism of the model to the appendix (which is also taken from Iakovlev et al., 2023 anyway, so that makes it easier), to leave some space for more experiments and more details on the experiments, since, once again, they are in my view key to the contribution of the pure performance of the model! Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses part for recommendations or modifications. Regarding other small questions or comments: - A general figure to describe the overall method would be very nice to have!
Given that the method is quite complex and gathers many different elements, it would really help in grasping the whole framework, at least at a high level. - For the spatial aggregation, you use an MLP, like for example in MAgNet. The latter actually shows in the paper, in an ablation study, that the use of the MLP interpolator is actually not so beneficial with respect to a cubic interpolator. Have you tried other interpolators, perhaps in an ablation study? Since this is the key to the grid-independent capabilities of the model, I think it is worth a part in the paper. - I think the C.4 appendix section on Forecasting should be put in the main text as it is quite crucial to the method. - I realized from the appendix training details that the batch size is 1. Was the data so big that it was impossible to fit a higher batch size in memory? Didn't it create training stability issues? - I think it is a bit unnecessary to split the contributions into 3 the way you do it in the introduction: generative model, PDE model, encoder. To my understanding, the model you are proposing is exactly the mix of the 3, so I think it is better to simply make this one strong claim: you designed one PDE learning model, with all its characteristics. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors addressed some limitations about the scalability of the method, but not much else; perhaps there are other points, like the training speed or the lack of special geometries taken into account within the method, or super-resolution capabilities still to be compared to SOTA? I do not see any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address the raised concerns below. **Additional experiments** Thank you for your suggestions. We agree that more experiments are required to fully demonstrate the advantages of our method. We conducted multiple additional experiments and present the results below. First, we extend our comparisons to a larger number of relevant methods. We added three more methods: a simple baseline, NeuralODE [1], and two SOTA methods as suggested by the reviewers, Neural Stochastic PDEs (NSPDE) [2] and Finite Element Networks (FEN) [3]. Please see the global rebuttal for a description of the setup and results. Second, to test the performance of our model on predicting over longer time horizons than the ones seen during training, we trained our model only on the first half of the training time grid (i.e., from 0 to 0.5T, where T is the observed time horizon), and tested it on the full time grid (of length T). The test error for shallow water and scalar flow did not increase, while for Navier-Stokes it almost doubled. This observation can be understood by the fact that the velocity of the species transported in the Navier-Stokes dataset keeps increasing over time. Thus, the system behavior that the model was trained on (before time 0.5T) is different from the system behavior on which the model was tested (after time 0.5T), naturally leading to higher prediction errors. Unfortunately, due to time constraints we were not able to conduct the time horizon experiments for other methods. But we are working on these experiments and will add them to the updated version of our manuscript. Comparison of the super-resolution performance to other methods was not done due to the poor performance of the other methods even on the original training grids. However, this is a good suggestion and we will also conduct this experiment and add it to the updated version of our manuscript.
**Time grids** Indeed, mentioning only the number of time points is not clear enough. For that reason we also provide information about the time horizons (T) in Appendix C.1. For Shallow Water T=0.1sec, for Navier-Stokes T=2sec, while for Scalar Flow T=2.5sec (we did not mention T for scalar flow, but we will add it in the revision). The system evolution over the mentioned time horizons is depicted in Fig. 6. As can be seen, the state of all systems changes a lot during these time intervals. For better intuition, we will visualize the time grids and add the figures into the revised version of our manuscript. Thank you also for your suggestion for visualizing the forecasting intervals. We will rework the figures to include this suggestion. In our last experiment "Comparison to other models" we discuss the forecasting time intervals, but we agree that this detail should be discussed at the beginning of the Experiments section. We will modify our manuscript accordingly. **How is the generative aspect of the method used?** As you correctly noted, we use the mean of the posterior predictive distribution as the prediction. As we discuss in Appendix C.4, the mean is computed by averaging over the predictions corresponding to different samples from the approximate posterior distribution. This is a common way in which uncertainty is incorporated into the model prediction. **Paper changes** Thank you for your suggestions about improving our work! We will incorporate them in the revision by moving some of the multiple shooting formalism to the appendix and moving the appendix on forecasting to the main text. We will also work on providing a figure that gives a better overall idea about our method. **Using MLP for spatial aggregations** As you correctly noted, we use an MLP as the spatial aggregation function. However, since the spatial aggregation is a part of the encoder, it is used only to infer the latent states, not for interpolation. 
For interpolation we use standard interpolation methods such as linear interpolation. In addition to the linear interpolation, we investigated the performance of other interpolation methods: k-NN and inverse distance weighting. Please see the results and description of the experiment in the global rebuttal. **Using small batch sizes** Indeed, the batch size was set to 1 for all experiments. The amount of GPU memory used during training was around 4-6GB, so there was a possibility to increase the batch size. However, we did not face any issues with convergence, so we decided to leave it at 1. **References** [1] Neural Ordinary Differential Equations, https://arxiv.org/abs/1806.07366 [2] Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics, https://arxiv.org/abs/2110.10249 [3] Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks, https://openreview.net/forum?id=HFmAukZ-k-2 --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: Thanks for answering to all my comments. I understand all answers and agree with most of them. Just one thing: I am not sure I understand what you mean by your answer about the use of the generative aspect of the method to extract uncertainty outputs. I did understand that you use the mean of the posterior distrib as the prediction, but I was suggesting that you could use the different samples from the posterior to extract a distrib and therefore some uncertainty instead of simply using the mean. Could you elaborate a bit more on why you do not do so? Otherwise, I appreciate your efforts regarding the extra experiments on long horizon and super resolution, on top of the ones for extra models, and I am more than willing to even increase my score to 7 should you perform them in the next revision. 
If you further improve the structure and the flow of the paper, which is, and I agree with the other reviewers on this, a little dense and hard to read at times, it will be a good contribution to the conference, definitely publishable in my opinion. --- Reply to Comment 1.1.1: Comment: Thank you for your positive comments. We address the raised concerns below. Indeed, the way our model makes predictions is similar to what you described. Our proposed method uses variational inference to estimate posteriors of model parameters (both the dynamics model and decoder) and latent variables (initial states). We then take n samples from the posterior (over the initial state and model parameters), generate n predictions, and then average them to obtain the final prediction which we use to compute the test errors and for comparison with other models. In probabilistic modeling literature that corresponds to the standard way of estimating the posterior mean prediction. As you correctly pointed out, the estimated posterior can also be used to compute the (full) predictive density over the test data points and thus allows computing probabilistic test error metrics, such as the expected log predictive density. However, previous methods can only make point predictions and thus we need to resort to posterior mean predictions to make meaningful comparisons. We agree that visualizing the full posterior predictive distributions will be interesting and can provide additional insight. We will include such demonstrations in the next revision. We also kindly note that making a revision is possible only after the discussion period, hence all additions (e.g. extra experiments on long horizon and super resolution) and improvements to the flow of the paper will be added after the discussion period is finished.
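The interpolation schemes mentioned in this thread (linear, k-NN, inverse distance weighting) are standard scattered-data techniques. A minimal sketch of inverse-distance weighting follows; the function name and the power parameter `p` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of inverse-distance-weighting (IDW) interpolation, one of the
# schemes the rebuttal says was compared against linear interpolation.
def idw_interpolate(x_obs, u_obs, x_query, p=2, eps=1e-12):
    """Interpolate values u_obs observed at locations x_obs onto x_query."""
    # pairwise distances, shape (n_query, n_obs)
    d = np.linalg.norm(x_query[:, None, :] - x_obs[None, :, :], axis=-1)
    w = 1.0 / (d ** p + eps)            # closer observations weigh more
    w /= w.sum(axis=1, keepdims=True)   # normalize weights per query point
    return w @ u_obs

x_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
u_obs = np.array([0.0, 1.0, 2.0])
x_query = np.array([[0.0, 0.0]])        # coincides with the first observation
u_hat = idw_interpolate(x_obs, u_obs, x_query)  # close to u_obs[0] = 0.0
```

A query point that coincides with an observation recovers (up to `eps`) the observed value, which is the exactness-at-nodes property one typically wants from such interpolants.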
Summary: The authors proposed a new method for learning PDEs by combining interpolation and NODEs. The difference from NODEs is that the authors use spatial derivatives in the hidden dynamics as the PDE, compute the loss between shooting gaps with a KL divergence, and compute the initial conditions with a transformer. The authors then support their claim by experimenting with three datasets: shallow water, Navier-Stokes, and scalar flow. Strengths: - The authors present a combination of various methods as a new solution to learning PDEs. - The performance of the method on the particular datasets the authors chose is quite good. Weaknesses: Overall, the proposed method might be nice, but the paper itself is very hard to read. - In section 3.2 generative modeling, there are two paragraphs discussing multiple shooting. There is no discussion of how multiple shooting is used in generative modeling. - There are too few captions for the figures. In figure 9, which parts are the time you interpolate? Which parts are extrapolation / forecasting? - The notation $q_\psi(\theta)$ is confusing. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: - In fig. 4 minimizing gaps, which norm do you use? Do you instead maximize the probability / likelihood in equation 17? In some commonly used PDEs, such as wave equations, derivatives of the PDE also matter. Did you take them into account? Or does your method only work on specific types of PDEs? - The experimental results in the paper are very different from the experimental results in the DINo paper. Do you use a different experimental setup? - Are figures 8 and 9 partially the same? - What is a true latent dimension in line 225? - What are $h$ in figure 5? Are they intermediate states? But in equation 22 they look like functions. Are they the same $h$? - Also in figure 5, did you add extra observations around $x_j$? Or are they within the dataset? Did you assume all observations are in the same positions but not grid-like?
Then why don't we use NODEs with finite element meshes? - What types of problems are these experiments? Why are they interesting? Are they experiments from the SOTA / benchmark you compare with, or are they associated with any practical problems in application? - For those synthetic experiments, what are the noise levels? What is the hidden equation? Navier-Stokes might be self-evident, but what about shallow water? - Do the models rely on initialization? How many runs did you use for your experiments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address the raised concerns below. **Using multiple shooting in generative modeling, and gap minimization.** Indeed, the first two paragraphs of Section 3.2 only motivate the use of multiple shooting, but the rest of the section goes into detail about how it is applied in our model. See especially Eqs. 14-17, which describe our generative model and the role of multiple shooting in it. To reduce the gaps we minimize the KL-divergence between the approximate posteriors of the shooting states and the continuity prior (see part (iii) of the ELBO, Sec. 4.1). Note that the gap minimization via the KL-divergence is obtained directly from the variational lower bound (see the ELBO derivation in Appendix B.1) because the multiple shooting is formulated as part of our proposed generative model. During minimization we take the whole latent state into account, which means our method is not restricted to specific types of PDEs. **Do the models rely on initialization? How many runs did you use for your experiments?** As for any other deep-learning-based model, appropriate initialization helps our model achieve better performance. As we discuss in Appendix A.2, we use the standard Xavier initialization. Also, as we mention in the experiments section, the results are the average over 4 runs with different random seeds. Overall, we observed that the performance of our method is very stable with respect to random seed changes. **In Figure 9, which parts are interpolation/forecasting? Are Figures 8 and 9 partially the same?** In Figure 9 we show only forecasting. We will clarify this point in the revision. Figures 8 and 9 both show predictions of our model, but the results in Figures 8 and 9 are for two different experiments. **What is a true latent dimension in line 225?** True latent state dimension refers to the dimension of the full system state, which is 3 for both shallow water and Navier-Stokes equations. 
For example, for the shallow water equation the state consists of the wave height (a scalar) and the velocity (a 2-dimensional vector). We will also clarify this point in the revision. **What are the h in Figure 5?** As we discuss at the end of Section 4.2, the h are functions that are parts of our encoder for amortized variational inference. **In Figure 5, did you add extra observations around xj?** No, we do not add any extra observations. We assume that we have observations only at the observation locations. As we discuss in Sec. 4.2, we use interpolation to obtain the system state outside of the observation locations (i.e., the locations marked with crosses around xj in Figure 5). This is the core idea used both in the encoder and the dynamics function, which enables our proposed model to be continuous in space and grid-independent. We will clarify this aspect in the revised manuscript. **What types of problems are these experiments? Why are they interesting? Are they associated with any practical problems in applications?** As we discuss in Sections 1, 2, and 5, our experimental setup is highly relevant for real-world applications, especially where the observations might be collected over irregular spatiotemporal grids and the observed states might be incomplete. As our results (and the extra comparisons shown in the global rebuttal) demonstrate, this is a setting where currently available models fail, while our model demonstrates strong performance, highlighting its utility in such challenging scenarios. **For the synthetic experiments, what are the noise levels? What is the hidden equation?** The data used for the experiments in the main section is noiseless. However, in Appendix E we provide comprehensive results for noisy data with different levels of noise. As can be seen, our model is robust and maintains strong performance even under significant noise levels. 
Also, we provide all details about the systems used for our experiments in Appendix C, including the hidden equations. We will add this detail to the main text. **The experimental results in the paper are very different from the experimental results in the DINo paper. Do you use a different experimental setup?** Our setup is more challenging than the one used in the DINo paper. They use fully-observed states and 512 training trajectories. In contrast, we use partially observed states and only 60 trajectories. To ensure fair comparisons, we conducted extensive hyperparameter tuning for DINo (as well as for all other baselines) using the default model parameters in the DINo paper as the starting point. --- Rebuttal Comment 1.1: Comment: The authors provide reasonable explanations for most of my concerns. I am increasing my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your positive comment.
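The gap minimization discussed in this rebuttal can be illustrated with a toy numerical sketch. Here a squared-gap penalty stands in for the paper's KL term between the shooting-state posteriors and the continuity prior, and the dynamics are a simple linear ODE; all names are illustrative, not the authors' implementation:

```python
import numpy as np

def euler_segment(z0, f, t0, t1, n_steps=50):
    """Integrate dz/dt = f(z) from t0 to t1 with forward Euler."""
    z, dt = z0, (t1 - t0) / n_steps
    for _ in range(n_steps):
        z = z + dt * f(z)
    return z

def shooting_gap_penalty(shooting_states, f, boundaries):
    """Multiple shooting: integrate each segment from its own shooting
    state and penalize the gap to the next segment's initial state."""
    penalty = 0.0
    for i in range(len(shooting_states) - 1):
        z_end = euler_segment(shooting_states[i], f,
                              boundaries[i], boundaries[i + 1])
        penalty += np.sum((z_end - shooting_states[i + 1]) ** 2)
    return penalty

f = lambda z: -z                        # toy dynamics dz/dt = -z
bounds = np.array([0.0, 0.5, 1.0])      # two shooting segments
# Shooting states consistent with z(t) = exp(-t) give a near-zero penalty;
# arbitrary states produce large gaps that training would shrink.
consistent = [np.array([1.0]), np.array([np.exp(-0.5)]), np.array([np.exp(-1.0)])]
inconsistent = [np.array([1.0]), np.array([2.0]), np.array([0.1])]
assert shooting_gap_penalty(consistent, f, bounds) < 1e-4
assert shooting_gap_penalty(inconsistent, f, bounds) > 1.0
```

In the paper's probabilistic formulation the hard penalty is replaced by a KL term inside the ELBO, so the gaps are closed softly during variational training rather than constrained exactly.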
Summary: This work proposes a latent variable model for PDEs with an encoder-decoder architecture that evolves the latent variables in time with a neural ODE. The model achieves independence from the locations of the evaluation points by linearly interpolating the data onto points distributed in a fixed pattern around each evaluation point. Strengths: - The paper's main contribution is the grid-independence of the model, achieved by interpolating observations and latent states onto fixed neighboring points around each observation point - The experimental section investigates the effect of the latent state dimension and the size of the training dataset Weaknesses: - The collocation method is a function space method that finds solutions that solve the PDE at a given set of collocation points. I don't see how this is used in this paper. The authors refer to the general form of a first-order-in-time PDE in Eq (4) as the collocation method, but Eq (4) is just a general form of a PDE and not a method. - The method of lines is only about eliminating spatial derivatives by discretizing the solution for each $t$ in a function space, and does not contain any notion of data-driven models or of being restricted to evaluations of $z(t, x)$ as claimed in lines 88-89 - Generative modeling of PDE solutions (Section 3.2) is neither motivated nor evaluated in experiments - The use of Bayesian deep learning (variational inference of model parameters, Section 3.2) is not motivated - Overall, the approach is quite similar to the one presented in (Lienen and Günnemann, 2022) in terms of method of lines and interpolation of data, though the authors only mention it and do not compare against it. In contrast to the authors' writing in their related work section, I would even see this as the most closely related work. 
- Application of the method of lines is not novel as claimed in lines 37-38, see (Iakovlev et al., 2020) and (Lienen and Günnemann, 2022) - This paper does not actually propose/apply a collocation method as claimed in lines 37-38 - As a result, the novelty and significance of this work are limited (Iakovlev et al., 2020): https://arxiv.org/abs/2006.08956 (Lienen and Günnemann, 2022): https://arxiv.org/abs/2203.08852 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - What are the advantages of modeling PDEs in a latent space instead of data space? Do you have any experimental insight into this question? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors did not discuss any limitations of their model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address the raised concerns below. **Motivating the use of generative modeling and Bayesian inference** We agree that we can improve our presentation regarding these points. Generative modeling is a standard approach in state space modeling which allows one to explicitly define the parameter, process, and observation models, making the modeling assumptions explicit; see e.g., "Statistics for Spatio-Temporal Data" by N. Cressie. The ability to incorporate uncertainty in the predictions is valuable when there are limited amounts of data and in challenging real-world scenarios, where it allows us to account for inherent system stochasticity and model uncertainty. Please note that we use the mean of the posterior predictive distribution as our prediction, which implies that we fully utilize the Bayesian nature of our method. We will add these clarifications to the revision. **Comparison to other methods and to Lienen and Günnemann, 2022** Indeed, the method by Lienen and Günnemann, 2022 is related to ours as it also models data on irregular spatiotemporal grids. However, as we show in the new set of experiments (see the global rebuttal), it fails in the more realistic and challenging setting that we consider in our work. We agree that a more detailed comparison to a larger number of relevant methods is required. We added three more methods: a simple baseline, NeuralODE [1], and two SOTA methods as suggested by the reviewers: Neural Stochastic PDEs (NSPDE) [2] and Finite Element Networks (FEN) from Lienen and Günnemann, 2022 [3]. Please see the global rebuttal for a description of the setup and results. **Collocation method and the method of lines** We agree that our method does not utilize the classical form of the collocation method. But please note that our method closely follows the collocation method, and then modifies it using the method of lines. In more detail, we represent z(t, x) as an interpolant (Eq. 
3) and substitute it in the PDE (Eq. 4), which is then represented as a system of ODEs (Eq. 8), where each ODE corresponds to an observation location. This is very similar to how the collocation method is applied to time-dependent PDE problems. However, instead of using the collocation method directly, we combine it with the method of lines. While application of the method of lines is not novel, its combination with the collocation method is, to the best of our knowledge, novel and, as we discuss in lines 80-92, leads to multiple advantages such as grid-independence. We agree that the method of lines is not constrained to a particular type of spatial grid, but the way it was used in previous works makes their models grid-dependent. The main reason for grid-dependence, as discussed in lines 96-107, is reliance on the locations of the grid nodes, which our proposed method avoids. **Questions** **Q1:** What are the advantages of modeling PDEs in a latent space instead of data space? Do you have any experimental insight into this question? **A1:** This is an important question as this is one of the main contributions of our work. We demonstrate the advantages of our latent PDE model in Fig. 7, where we compare a data-space variant of our model (latent state dimension d=1) and a latent-space variant of our model (d > 1). As can be seen, modeling in the latent space results in drastic improvements in predictive performance. Setting d > 1 allows the encoder to infer the missing features of the state variable, which in turn allows the dynamics function to model the system more accurately. In other words, modeling a system in the data space places a fundamental limit on the model's accuracy if the system states are only partially observed, as in Scalar Flow and many real-world datasets. 
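The interpolate-then-differentiate construction described in this rebuttal (fit a local interpolant, substitute its spatial derivatives into the PDE, integrate the resulting ODE system in time) can be sketched for a 1-D heat equation u_t = u_xx on an irregular grid. This is an illustrative classical sketch; the paper's model replaces the hand-derived right-hand side with a learned dynamics function:

```python
import numpy as np

def u_xx(u, x):
    """Second spatial derivative on an irregular 1-D grid, obtained by
    fitting a local quadratic interpolant through each interior node and
    its two neighbours (the collocation step)."""
    hl, hr = x[1:-1] - x[:-2], x[2:] - x[1:-1]
    d = np.zeros_like(u)
    d[1:-1] = 2 * (u[:-2] / (hl * (hl + hr))
                   - u[1:-1] / (hl * hr)
                   + u[2:] / (hr * (hl + hr)))
    return d  # boundary nodes held fixed (Dirichlet conditions)

# Method of lines: the PDE u_t = u_xx becomes one ODE per grid node,
# integrated here with forward Euler on a jittered (irregular) grid.
h = np.pi / 40
x = np.linspace(0.0, np.pi, 41)
x[1:-1] += (np.random.RandomState(0).rand(39) - 0.5) * 0.4 * h
u = np.sin(x)                  # exact solution: u(t, x) = exp(-t) sin(x)
dt, T = 2e-4, 0.1
for _ in range(int(T / dt)):
    u = u + dt * u_xx(u, x)
exact = np.exp(-T) * np.sin(x)
assert np.max(np.abs(u - exact)) < 1e-2
```

The grid-independence argument is visible here: nothing in `u_xx` assumes uniform spacing, only that each node has neighbours close enough for the local interpolant to be accurate.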
**References** [1] Neural Ordinary Differential Equations, https://arxiv.org/abs/1806.07366 [2] Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics, https://arxiv.org/abs/2110.10249 [3] Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks, https://openreview.net/forum?id=HFmAukZ-k-2 --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I will remain with my assessment. --- Reply to Comment 1.1.1: Comment: Thank you for your comment. Please let us know if we can clarify any remaining concerns.
Summary: This paper introduces a new method for learning time-dependent PDE solutions with noisy, partially-observed data on irregular grids. This setting is quite challenging and is aligned with real-world data acquisition. The authors adopt a generative framework and combine two techniques for solving PDEs: the collocation method, which is used to approximate spatial derivatives, and the method of lines to propagate the PDE solution forward in time. At the core of their architecture lies a spatio-temporal encoder that aggregates, for every spatial/temporal coordinate pair $(x_j, t)$, information over the spatial neighborhood $\mathcal{N}_S(x_j)$ and the temporal neighborhood $\mathcal{N}_T(t)$. They tackle the Bayesian problem with a variational formulation, and learn the overall model by maximizing an ELBO objective. They use sparse Bayesian multiple shooting to stabilize and accelerate the training. They validate their framework on three different datasets: Shallow-Water (SW), Navier-Stokes (NS), and Scalar Flow (SF). Strengths: This is overall a good technical contribution. The paper shows solid experimental results, outperforming DINo and MAgNet on the three different datasets. The idea of using a generative model for solving PDEs is interesting, and the combination of all the different blocks is far from trivial to train. The explanation of the architecture is clear, though the notation is sometimes a bit hard to follow. The authors also evaluate the capability of the model to adapt to coarser and finer grids to support their claims. Weaknesses: * The paper is quite difficult to read overall. The introduction and related work are very brief, especially for such a difficult topic. It would help the reader to add some background from the PDE deep learning literature in those sections. 
For instance, there is no mention of Neural Operators, which have been a popular topic recently, and there is no clear explanation of the limitations of existing state-of-the-art methods. __N.B.__: DINo, which tackles a very similar problem (except for noiseless data), is not cited in the introduction or the related work but is still used as the main baseline. __N.B.2__: Section 3.2 with the explanation of the multiple shooting framework is a bit confusing and does not add much in terms of explanation of the model. On the other hand, inference is only exposed in the Appendix and should be in the core of the paper. * Though the idea of spatiotemporal encoding is sound, its realization does not appear to be very elegant, and some limitations that could contradict the original claim of spatial continuity come to mind. What if the interpolation in the spatial neighborhood is not applicable? Say you have an obstacle in the domain, or you want to know the solution at the boundaries of your system; how can this work? In all the figures the frames seem to contain only the convex hull of the set of points. This is worrying in terms of possible application to domains with complex geometries, which are of main interest for people working in computational dynamics. * The results shown in Figure 10 are very impressive, especially for Navier-Stokes (NS) and Shallow-Water (SW). However, I wonder how the model can generalize this well to new test initial conditions with only 2 training trajectories. In terms of MAE, the method is already at 0.03 and 0.07 with two training samples for SW and NS, while it reaches 0.015 and 0.041 with 60 samples. Do you have a possible explanation for this phenomenon? * I understand that the main motivation of the paper is to develop a framework for partially-observed, noisy, irregularly-spaced observations, and therefore there are not a lot of suitable candidate baselines. 
Still, I think it would help readers understand the difficulty of the tasks to have a comparison with more classical neural operators and PDE solvers in regular settings. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Did you try interpolant functions other than linear interpolation? Similarly, have you tried the model with spherical data to see how the interpolation on a sphere behaves? * How does the model scale with the number of grid points? ~1100 points for a 2D domain does not appear to be really high-resolution. Is the method still applicable, for instance, with standard grids used in the community such as 64x64 or 64x128 points? * Is the method able to capture high-frequency patterns that occur in turbulent flows? * Is it possible that the proposed model overfitted the training horizon and as a result obtained better results than DINo and MAgNet? This would be in line with the training-size analysis. How does the model extrapolate beyond the training horizon? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There is no limitations section. The authors briefly discuss the difficulty of scaling the method with the number of points. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address the raised concerns below. **Paper structure** Thank you for your suggestions regarding the structure of our manuscript. We will improve our work by incorporating them into the revision. We will move details about forecasting to the main text and will extend the introduction and related work to include neural operators and other relevant models. We will also elaborate on the limitations of existing SOTA methods. **Comparison to other methods** We agree that a more detailed comparison to a larger number of relevant methods is required. We added three more methods: a simple baseline, NeuralODE [1], and two SOTA methods as suggested by the reviewers: Neural Stochastic PDEs (NSPDE) [2] and Finite Element Networks (FEN) [3]. Please see the global rebuttal for a description of the setup and results. **Applications of our method on non-convex spatial domains** Our method can be applied to non-convex spatial domains without any modifications. The existence of obstacles is not a problem as long as the interpolation method is applicable (which is the case for many interpolation techniques such as k-NN, IDW, linear interpolation, and piece-wise polynomial interpolation). The state of the system can be obtained at any point within the convex hull of the observation locations. One should expect that the number and positions of the observation locations are appropriate for the problem at hand; in that case it would be possible to obtain the state sufficiently close to the boundaries. **Data efficiency.** To achieve the outstanding data efficiency shown in Fig. 10, we utilize the spatiotemporal locality of dynamical systems. Namely, we use the fact that derivatives in a PDE are defined locally (hence we need only local information to define the time rate of change), and assume that the latent state depends only on a local spatiotemporal neighborhood (as discussed in Section 4.2). 
These ideas are reflected in the design of our model, which operates on local spatial and temporal neighborhoods instead of working on the whole grid, as DINo does, for example. **Questions** **Q1:** Spherical grids and other interpolation methods. **A1:** Spherical grids are outside the scope of our work, but our method can potentially be applied to such data assuming that an appropriate interpolation method for spheres is used and that the spatial neighborhoods are appropriately defined on the sphere. We also investigated other interpolation techniques. We added two new interpolation methods, k-NN and inverse distance weighting, and tested them on spatial grids with different resolutions. Please see the global rebuttal for a description of the setup and results. **Q2:** Is the method able to capture high-frequency patterns that occur in turbulent flows? **A2:** We did not consider that particular question, but as our experimental results suggest, our model is capable of learning complex flow-based phenomena. **Q3:** How does the model scale with the number of grid points? **A3:** Since our model operates on each grid point, it scales linearly with the number of grid points. For reference, our model occupies around 6GB on GPU during training for the shallow water dataset. **Q4:** How does the model extrapolate beyond the training horizon? **A4:** To test this, we trained our model only on the first half of the training time grid (i.e., from 0 to 0.5T, where T is the observed time horizon), and tested it on the full time grid (of length T). The test error for shallow water and scalar flow did not increase, while for Navier-Stokes it almost doubled. This observation can be explained by the fact that the velocity of the species transported in the Navier-Stokes dataset keeps increasing over time. 
Thus, the system behavior that the model was trained on (before time 0.5T) is different from the system behavior on which the model was tested (after time 0.5T), naturally leading to higher prediction errors. **References** [1] Neural Ordinary Differential Equations, https://arxiv.org/abs/1806.07366 [2] Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics, https://arxiv.org/abs/2110.10249 [3] Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks, https://openreview.net/forum?id=HFmAukZ-k-2 --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: > **Paper Structure** Thank you for clarifying the method's structure. > **Comparison to Other Methods** I appreciate the included baselines. While not precisely what I requested, they still enhance the experimental evaluation. I wanted to highlight the underlying complexity of the PDE task itself (no noise + irregular grid). Comparing against FNO or DeepONet on regular grids could add value to your work. > **Applications on Non-Convex Spatial Domains** Regarding the use of an interpolant function to query the spatial neighborhood $\mathcal{N}_s$, could there be issues if neighbor points fall outside the domain due to obstacles? > **Data Efficiency** I understand the point about local modeling's inductive bias aiding close-to-true PDE learning. Then, is this bias from the architecture components alone, or does the Bayesian framework also contribute? Could the same results be achieved without variational training? > **A1:** Thank you for incorporating additional interpolation methods. > **A2:** Noted. > **A3:** Could you specify the batch size that was used? > **A4:** The inclusion of the temporal extrapolation evaluation is appreciated. Overall, most of my concerns are addressed, and I'm inclined to raise my score to 6. --- Reply to Comment 1.1.1: Comment: Thank you for your positive comments. We address the raised concerns below. 
**Comparison to Other Methods** We forgot to mention that NSPDE is an operator-based method designed for irregular grids, so it is a more natural choice for our setup than, e.g., FNO. Sorry for the confusion! **Applications on Non-Convex Spatial Domains** This is a good point. We actually faced this problem with the Scalar Flow dataset. We found that marking the "out of domain" nodes by setting their value to -1 was sufficient (the observations were between 0 and 1). We missed this detail in our manuscript, so we will add it to the revision. **Data Efficiency** The local inductive bias is only due to the design of the model components (encoder and dynamics function). One could use point estimation instead of Bayesian inference and still enjoy the data-efficiency aspect of our model. **Could you specify the batch size that was used?** The batch size was 1.
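The out-of-domain marking described above can be sketched as a simple preprocessing step. This is a hypothetical helper, not the authors' code; the sentinel -1 is unambiguous only because valid observations lie in [0, 1]:

```python
import numpy as np

def mark_out_of_domain(values, in_domain_mask, sentinel=-1.0):
    """Replace values at nodes outside the physical domain (e.g. inside
    an obstacle) with a sentinel the network can learn to recognise.
    Assumes valid observations lie in [0, 1], so -1 cannot collide."""
    marked = values.copy()
    marked[~in_domain_mask] = sentinel
    return marked

vals = np.array([0.2, 0.9, 0.5, 0.7])
mask = np.array([True, False, True, True])   # node 1 sits inside an obstacle
out = mark_out_of_domain(vals, mask)
assert out[1] == -1.0
assert np.all(out[[0, 2, 3]] == vals[[0, 2, 3]])
```

For data that is not bounded to [0, 1], a separate mask channel would be a safer design than a sentinel value.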
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their careful reading and detailed comments. We believe that the suggested revisions have improved our manuscript. We believe our answers address all review comments, but if anything remains unclear we are happy to provide further clarifications. We provided a response for each reviewer individually, and use this section to provide details and results of the extra experiments that the reviewers requested. **Comparison to other models** We agree that a more detailed comparison to a larger number of relevant methods is required. We added three more methods, a simple baseline NeuralODE [1], and two SOTA methods as suggested by the reviewers: Neural Stochastic PDEs (NSPDE) [2], and Finite Element Networks (FEN) [3]. We describe the setup and results below. Setup: In addition to our partially-observed datasets, we created fully-observed versions of the synthetic datasets, where the whole system state is observed. This is a simplified setup where most models show good results. In the table below we show test MAE of all the methods. 
| Model | Shallow Water (Full) | Shallow Water (Partial) | Navier Stokes (Full) | Navier Stokes (Partial) | Scalar Flow (Real-World) |
|:------|:--------------------:|:-----------------------:|:--------------------:|:-----------------------:|:------------------------:|
| NODE | $0.036 \pm 0.000$ | $0.084 \pm 0.001$ | $0.053 \pm 0.001$ | $0.109 \pm 0.001$ | $0.056 \pm 0.001$ |
| FEN | $\boldsymbol{0.011 \pm 0.002}$ | $0.064 \pm 0.005$ | $0.031 \pm 0.001$ | $0.108 \pm 0.002$ | $0.062 \pm 0.005$ |
| NSPDE | $0.019 \pm 0.002$ | $0.033 \pm 0.001$ | $0.042 \pm 0.004$ | $0.075 \pm 0.002$ | $0.059 \pm 0.002$ |
| DINo | $0.027 \pm 0.001$ | $0.063 \pm 0.003$ | $0.047 \pm 0.001$ | $0.113 \pm 0.002$ | $0.059 \pm 0.001$ |
| MAgNet | NA | $0.061 \pm 0.001$ | NA | $0.103 \pm 0.003$ | $0.056 \pm 0.003$ |
| Ours | $0.014 \pm 0.002$ | $\boldsymbol{0.016 \pm 0.002}$ | $\boldsymbol{0.024 \pm 0.003}$ | $\boldsymbol{0.041 \pm 0.003}$ | $\boldsymbol{0.042 \pm 0.001}$ |

We see that some of the baseline models achieve reasonably good results on the fully-observed datasets, but they fail on partially-observed data, while our model maintains strong performance in all cases. Apart from the fully observed shallow water dataset, where FEN performs slightly better than our method, our proposed method outperforms the other methods on all datasets by a clear margin. We will add these results to the revised manuscript. **Investigating other interpolation methods** We investigated other interpolation techniques and describe the setup and results below. Setup: We used spatial grids with three different resolutions (Coarser, Original, and Finer). We trained our model only on the Original grid, and tested it on all grids to see how well it generalizes to grids with lower/higher resolution. We added two new interpolation methods: k-NN and inverse distance weighting. The results (test MAE) are shown in the table below. 
| Dataset | Grid | k-NN | Linear | IDW |
|:-------------:|:--------:|:-----------------:|:-----------------:|:------------------:|
| | Coarser | $0.046 \pm 0.002$ | $0.034 \pm 0.001$ | $0.038 \pm 0.002$ |
| Shallow Water | Original | $0.017 \pm 0.002$ | $0.016 \pm 0.002$ | $0.017 \pm 0.003$ |
| | Finer | $0.031 \pm 0.003$ | $0.017 \pm 0.003$ | $0.030 \pm 0.002$ |
| | Coarser | $0.087 \pm 0.006$ | $0.069 \pm 0.009$ | $0.066 \pm 0.006$ |
| Navier Stokes | Original | $0.048 \pm 0.009$ | $0.041 \pm 0.003$ | $0.045 \pm 0.010$ |
| | Finer | $0.054 \pm 0.009$ | $0.044 \pm 0.004$ | $0.049 \pm 0.002$ |
| | Coarser | $0.041 \pm 0.021$ | $0.032 \pm 0.009$ | $0.035 \pm 0.012$ |
| Scalar Flow | Original | $0.019 \pm 0.001$ | $0.018 \pm 0.000$ | $0.018 \pm 0.001$ |
| | Finer | $0.040 \pm 0.016$ | $0.026 \pm 0.006$ | $0.028 \pm 0.007$ |

We see that all interpolation methods perform rather similarly on the Original grid, but linear interpolation and IDW tend to perform better on finer/coarser grids than k-NN. Since our method can be combined with essentially any interpolation technique, this leaves users the freedom to choose the particular technique that works for their application, although based on our experiments linear interpolation is often accurate. We believe this ablation is a valuable addition to our manuscript, and we will include these experiments, together with a general discussion on choosing an interpolation method, in the revised manuscript. **References** [1] Neural Ordinary Differential Equations, https://arxiv.org/abs/1806.07366 [2] Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics, https://arxiv.org/abs/2110.10249 [3] Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks, https://openreview.net/forum?id=HFmAukZ-k-2
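For readers unfamiliar with the interpolation schemes compared above, here is a minimal 1-D sketch of k-NN and inverse distance weighting, with NumPy's `interp` standing in for linear interpolation. These are illustrative implementations only; the experiments use their 2-D counterparts:

```python
import numpy as np

def knn_interp(xq, x, y, k=2):
    """k-NN interpolation: average the values of the k nearest observations."""
    idx = np.argsort(np.abs(x - xq))[:k]
    return y[idx].mean()

def idw_interp(xq, x, y, p=2, eps=1e-12):
    """Inverse distance weighting: weights fall off as 1 / distance^p."""
    w = 1.0 / (np.abs(x - xq) ** p + eps)
    return np.sum(w * y) / np.sum(w)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * x + 1                         # samples of a linear function
# At the midpoint xq = 1.5 all three schemes recover the true value 4.0
# (k-NN and IDW do so here only because the query is symmetric between
# its neighbours; off-center queries would show their bias).
assert np.isclose(np.interp(1.5, x, y), 4.0)   # linear interpolation
assert np.isclose(knn_interp(1.5, x, y), 4.0)  # mean of y = 3 and y = 5
assert np.isclose(idw_interp(1.5, x, y), 4.0)
```

The table's pattern (k-NN degrading most on finer/coarser grids) is consistent with k-NN being a piecewise-constant-style estimator, while linear interpolation and IDW vary smoothly between observation points.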
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose a grid-independent model for learning PDEs from noisy experimental data. The proposed framework is probabilistic, with an encoder that handles data efficiency and makes the solution grid-independent. I think better differentiating from Ayed et al. and DINo would be helpful for the novelty contribution, even though experimentally in these settings the proposed model performs better. Strengths: - The problem setting is important: being able to learn PDEs from noisy and sparse observation data. - The paper is well-written, with a good introduction to PDEs in the intro section. - Good overview of the related data-driven methods, such as Neural Operators (Li et al., 2021) and MeshGraphNets (Pfaff et al., 2021). - Grid-independent solvers are an advantage of deep learning models over numerical methods. It is a nice idea to define a neighborhood rather than use the adjacent grid points. - The proposed method is a nice application of the method of lines. - The problem statement is well-defined. - It is good that the authors are considering the real-world setting of extrapolation in time. - Adding probabilistic output is also an important contribution. - Synthetic and real-world datasets are both tested, including a nice set of challenging benchmark datasets, e.g., Shallow Water and Navier-Stokes. The Scalar Flow real-world camera dataset is also interesting. - Nice experimental setting going from a given grid to a coarser one and vice versa to a finer one. - The data efficiency results are nice. Weaknesses: - There is no guarantee that the PDEs learned from data will be better than classical numerical PDE modeling, so the first paragraph of the introduction could be modified a bit. - The authors should better motivate the irregularly spaced, noisy, and partial observations in this context. In this case, Gaussian Processes or Attentive Neural Processes could also be considered as baselines. 
In particular, "Learning Physical Models that Can Respect Conservation Laws" [Hansen et al., ICML 2023] uses a constrained generative ANP model to perform the interpolation task from noisy sparse data to a fine grid. This work also uses a Gaussian observation model as in Eqn. 9. - There are advantages to operator-based methods vs. interpolation. I think a comparison to those methods, which are resolution-independent, would be beneficial as well; they should also be added to the related work section rather than considering only interpolation-based methods. - Small number of comparisons: only DINo and MAgNet as baselines. Neural Operators and MeshGraphNets could also be considered as baselines. - The use of Neural ODEs is similar to the approach in "Continuous PDE Dynamics Forecasting with Implicit Neural Representations" [Yin et al., ICLR 2023]. In this work, the authors parameterize the latent space using implicit neural representations and then solve the ODE using Neural ODEs. It is good that the authors benchmark against this work (DINo). - GNN-based models such as MeshGraphNets also assume the derivatives are computed using the neighboring points, as mentioned on lines 97-99. While this is important to highlight, since the approach here uses a neighborhood rather than grid points, it may be better to move this to a related work section so that only the novelty and design of the proposed method are discussed in 3.1. - The model seems to be connecting various methods from different papers, such as Neural ODEs and Iakovlev et al., 2023. It may be good to highlight and clarify the novelty, in particular with respect to Iakovlev et al., 2023. - It looks like several heuristics need to be applied to make Neural ODEs work here. Also see "Learning continuous models for continuous physics" [Krishnapriyan, 2022]. 
It may be simpler to use a discrete numerical time-stepping method as done in "CROM: Continuous Reduced-Order Modeling of PDEs Using Implicit Neural Representations" [Chen et al., ICLR 2023]. This is also used in the MAgNet baseline, and I don't think changing only the time-stepping scheme from discrete Euler to Neural ODEs (also done in DINo) is sufficient novelty for publication. Forward Euler is also unstable in many cases, as shown in the aforementioned Krishnapriyan, 2022, and higher-order schemes like Runge-Kutta 4 (RK4) may be beneficial. - The overall model has many components and is quite complicated. I was wondering if any ablations were run to see which component is the most important. - 3-4 hours depending on the problem size and architecture is a bit slow. I think cost-accuracy comparisons to numerical solvers in the experimental section would be beneficial. - Figures 7-8 take significant space and may be moved to the appendix as ablations, especially since they also show the results of the proposed method. The error plots between the predicted and exact may be easier to visualize. The scalar flow results look diffused. - I think stating "very high accurate predictions" on line 254 is a bit strong. The magnitude of the error is still high (1e-2) in comparison to numerical solvers. Minor - For notation $u_i^j$ in numerical methods, the superscript typically notates time and the subscript notates the spatial coordinate. - I would give Section 3 a more descriptive title than Method and also name the proposed method as well. - Can clarify on line 226 that the true latent space dimension for Navier-Stokes and Shallow water is 3 due to the x,y components of the velocity and the scalar pressure. - Capital A on line 236 - "Similarly" to "similar" on line 240 - No parentheses in references on line 272. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
I know the authors mention that they use linear interpolation in the results in this paper, and I was wondering if they performed any ablations with different types of interpolation. 2. It is a bit of a strong assumption that the neighboring points are sufficient to compute all the derivatives. The authors mention finite differences here, but higher-order finite difference methods can use more grid points in the stencils than just the neighbors. Can higher-order stencils be considered here? It would also be good to add a reference for finite difference methods, e.g. LeVeque's numerical analysis book. 3. The authors mention that they test neighborhoods of various shapes and sizes. I think this is an important ablation. How were two concentric circles chosen? The shapes are very important in numerical methods such as finite elements and those that use a Voronoi tessellation. In this case, it's a bit hard for me to see how it is purely grid independent. The number of evaluation points must also be carefully chosen. 4. Also in Equation 8, the ODEs are defined only at the gridpoints and then interpolation is performed at every time step in the ODE solver - how expensive is this? 5. Why is the distribution modeled as Gaussian and can the model be extended to support other distributions? 6. How is the temporal neighborhood size delta_T determined? Since the model performs extrapolation in time, shouldn't this neighborhood only depend on the prior time steps? 7. Were simpler architectures than Transformers tested for the temporal encodings? 8. I understand the latent space dimension for the synthetic 2D datasets, but for the scalar flow, how do we interpret the improvement in prediction quality for d=5 instead of d=1 despite similar metrics? Can you prove any relation between latent dimension d and the error, since it seems experimentally to decay monotonically? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations on the expense of the proposed method and future work are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
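As background for the method-of-lines discussion in this review, here is a minimal, generic sketch of the classical method of lines on a toy 1D heat equation (this is an editor-added illustration under my own assumptions, not the authors' learned model): space is discretized with a finite-difference stencil, and the resulting system of ODEs is integrated in time with an off-the-shelf solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for the 1D heat equation u_t = u_xx on [0, 1] with
# u(0) = u(1) = 0: discretize space, then integrate the resulting ODEs in time.
n = 64
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def rhs(t, u):
    # Second-order central finite difference for u_xx at interior points;
    # boundary values stay fixed at zero.
    dudt = np.zeros_like(u)
    dudt[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return dudt

u0 = np.sin(np.pi * x)
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="RK45", rtol=1e-8, atol=1e-10)
u_T = sol.y[:, -1]

# Exact solution for this initial condition: u(x, t) = exp(-pi^2 t) sin(pi x).
u_exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
print(np.max(np.abs(u_T - u_exact)))  # small spatial discretization error
```

The paper under review replaces the hand-written finite-difference right-hand side with a learned function of the neighborhood values, which is where the reviewer's question about stencil order arises.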
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address the raised concerns below. **Comparison to numerical solvers** As discussed in Section 2, we assume the dynamics are unknown, hence numerical solvers are not applicable since they require fully known system dynamics. **Comparison to other methods** We added three more methods, a baseline NeuralODE [3], and two SOTA methods as suggested by the reviewers: Neural Stochastic PDEs (NSPDE) [4], and Finite Element Networks (FEN) [5]. Please see the global rebuttal for a description of the setup and results. **Model complexity, importance of model components, and ablation studies** Our method consists of multiple components. The encoder is used to infer the latent state from observations. The dynamics function is used to propagate the latent state in time. The decoder maps the latent state to parameters of the observation distribution. Multiple shooting is a parameter inference technique that accelerates and stabilizes training. Considering the above, each component has its role and is important. Regarding various model parameters, we conducted multiple ablation studies throughout our work; please see the results in Fig. 7, 13, 14, and 15. **Discrete-time models and heuristics** In our setting, the temporal grids are irregular; thus, discrete-time models are not applicable. Training dynamic models is hard and often requires heuristics (one-step training, progressively increasing the length of the training trajectory, multiple shooting, and others) to stabilize training. **Novelty relative to other methods.** As outlined in the Introduction, the main advantages of our model are: (i) grid-independence, (ii) space-time continuity, (iii) data-efficiency, (iv) learning from partial observations, and (v) fast and stable training. No previous method has all these features. The main difference wrt Iakovlev et al., 2023 is that they do latent ODE modeling, while we do latent PDE modeling. 
Also, their model is not applicable in our case since: 1) The encoder and decoder are grid-dependent and applicable only on uniform spatial grids, 2) Their dynamics function could not be adapted to the PDE setting as it assumes that every point affects every other point on the grid, which is not the case in PDEs, where each point is affected only by points within a small neighborhood. **Paper structure** Thank you for your suggestions regarding the structure of the manuscript. We will improve our work by incorporating them into the revised version. Please see global rebuttal for details. **Questions** **Q1:** Different interpolation methods. **A1:** We investigated other interpolation techniques. We added two new interpolation methods: k-NN and inverse distance weighting, and tested them on spatial grids with different resolution. Please see the global rebuttal for description of the setup and results. **Q2:** Using neighborhood values to compute derivatives, and extension to higher-order stencils. **A2:** As shown in Fig. 3, we do not use only neighbor nodes' values, instead, we define fixed spatial neighborhoods whose values depend on other nodes as well (lines 97-107). In Appendix D we investigate the effect of using spatial neighborhoods of different shapes and sizes, so extension to higher-order stencils is straightforward. **Q3:** Spatial neighborhood shape and grid-independence. **A3:** Concentric circles shape was selected because it has better coverage than e.g., a cross or a square. As we show in Fig. 14, the choice of the neighborhood shape (number of circles) does not have a strong effect on the model's performance. Also note that the spatial neighborhoods have the same shape and do not depend on the observation locations, which makes our model grid-independent as we discuss in lines 97-107. **Q4:** How expensive is interpolation? 
**A4:** Interpolation amounts to matrix-vector multiplication where the matrix has dimensions Cn-by-n, where n is the number of nodes, and C is a small constant that depends on the spatial neighborhood shape. For our choices of the interpolation methods the matrix is extremely sparse, hence requires negligible memory to store and the matrix-vector product can be computed efficiently. Overall, it is one of the least expensive parts of the model. **Q5:** Why is the distribution modeled as Gaussian and can the model be extended to support other distributions? **A5:** The observation distribution is Gaussian as this is a reasonable choice in the absence of any system-specific requirements. Our model is agnostic to the choice of likelihood models, and our model can be easily extended to other observation distributions by asking the decoder to output parameters of that distribution (as in Eq. 9). **Q6:** Temporal neighborhood size and dependence only on prior time steps. **A6:** The temporal neighborhood size is a hyperparameter. Using only previous time points to infer the latent state would be similar to filtering, while using also the future time points is similar to smoothing. Smoothing tends to be more accurate (see e.g., Kalman filtering/smoothing). **Q7:** Were simpler architectures than Transformers tested for the temporal encodings? **A7:** We also considered continuous-time versions of RNNs, but they are much slower than Transformers. **Q8:** Improvement of the results for larger latent space dimension d. **A8:** For the scalar flow, improvement in prediction quality for larger d can be interpreted similarly to the synthetic datasets. Setting d=1 is not sufficient since the "true state" has higher dimension. Setting d > 1 allows the encoder to infer the missing states which allows the dynamics function to model the system more accurately. Proving any relationships, beyond simply extrapolating the points to larger d, is difficult due to complexity of the system. 
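The sparse matrix-vector view of interpolation described in A4 can be sketched as follows. This is an editor-added illustration: the inverse-distance-weighting variant is shown because it is one of the added methods, while the grid sizes, neighbor count, and random locations are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
n, m, k = 500, 2000, 3          # n grid nodes, m query points, k neighbors

nodes = rng.random((n, 2))      # observation locations
queries = rng.random((m, 2))    # neighborhood evaluation points

# Inverse-distance weights over the k nearest nodes (one illustrative choice;
# linear interpolation yields a similarly sparse matrix).
d = np.linalg.norm(queries[:, None, :] - nodes[None, :, :], axis=-1)
idx = np.argsort(d, axis=1)[:, :k]                  # k nearest nodes per query
w = 1.0 / (np.take_along_axis(d, idx, axis=1) + 1e-12)
w /= w.sum(axis=1, keepdims=True)                   # rows sum to one

rows = np.repeat(np.arange(m), k)
A = csr_matrix((w.ravel(), (rows, idx.ravel())), shape=(m, n))

u = rng.standard_normal(n)      # values at the grid nodes
v = A @ u                       # interpolated values: one sparse matvec
print(A.nnz, m * k)             # only k nonzeros per row
```

Because the matrix has only a constant number of nonzeros per row, storage is O(m) and the matvec is cheap, matching the rebuttal's claim that interpolation is one of the least expensive parts of the model.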
**References** [3] Neural Ordinary Differential Equations, https://arxiv.org/abs/1806.07366 [4] Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics, https://arxiv.org/abs/2110.10249 [5] Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks, https://openreview.net/forum?id=HFmAukZ-k-2 --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you very much for your detailed rebuttal and the additional experiments run with more baselines. The ablation on the different interpolation methods is also very interesting. I have three remaining points: - I do agree with Reviewer 1peH that theory and convergence analysis are very important and parts of the method are very heuristic. I don't think this statement "theory to guide all our choices, but unfortunately it is not always available for neural network parameterized models, and what remains is heuristics and empirical evidence." is a strong rebuttal by the authors, since we need strong theory to justify and motivate these SciML works. - I don't understand why discrete-time models are not possible with irregular time steps. There are discrete adaptive time-stepping methods. - It is good that the authors added additional baselines, but they did not address my question on operator-based methods, such as FNO, which was also raised by Reviewer yPZT on the advantages of the proposed type of interpolation method vs. operator methods that are also resolution independent. For now, I will be keeping my score, thank you.
Summary: Modeling systems with spatiotemporal evolutions, such as the ones that arise in problems governed by PDEs, is challenging. This is more pronounced when the system's underlying mechanisms are too complex or unknown. The authors propose a grid-independent generative model from noisy and partial observations on irregular domains. The latent state dynamics are constructed by merging ideas inspired by classical numerical PDE analysis, such as the collocation method and the method of lines, which are discussed in sections 3.1 and 3.2. They also propose a novel encoder design that operates on local spatiotemporal neighborhoods for improved data-efficiency and grid-independence. The authors apply their model to three use cases: two from synthetic and one from real-world datasets. Strengths: Multiple shooting for training on datasets from dynamical systems with long time simulations is a creative idea to enable a reasonable cost of learning and avoid learning instabilities. In Appendix D, they analyzed the impact of radius and multiple shooting on the overall performance of the model to further complete the analysis. The examples provided are sophisticated enough to show that the framework can be applied to even more realistic scenarios. Weaknesses: I have two major concerns about this work: 1. There are various heuristic choices/assumptions with no concrete proofs. For example, the choice of linear interpolant to map the values at the grid points of z(t) to z(t,x) is arbitrary. Moreover, when the authors write the dynamical system in the form of equation 7, while they are inspired by collocation methods, they are not actually utilizing such a method. Instead, they argue that the function and its (sufficiently smooth) derivatives are a function of the neighborhood locations. This is another heuristic assumption which prevents a careful convergence analysis. Can the authors prove that if the radius tends to zero, such a solution converges to the “real” solution? 
At what order does the error decrease? The analysis in Appendix D is in fact anecdotal and does not mathematically guarantee a desired performance. Finally, regarding the multiple shooting method, while I think this is a very nice idea, it’s not novel. For example, see "50 Years of Time Parallel Time Integration" by Martin Gander, where several methods other than multiple shooting are proposed to accelerate ODE integration. While I don’t see any fundamental error, I am not convinced that the analysis is mathematically sound for computational adoption. 2. The topic of the paper is closely related to model discovery, which has been investigated in the PDE community for quite some time now. A more appropriate comparison would be the one with SINDy ("Discovering governing equations from data by sparse identification of nonlinear dynamical systems" by Brunton et al), Neural ODE (Neural Ordinary Differential Equations by Chen et al), or auto-encoder-based discussions (Data-driven discovery of coordinates and governing equations by Champion et al). Please also note that recently people have also used neural ODEs with collocation points (Physics-informed neural ODE (PINODE): embedding physics into models using collocation points by Sholokhov et al). To better position this work in the larger community, a much more detailed comparison is needed. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The fact that the number and positions of sensors (observation locations) change with time may actually help the problem at the time of inference (reconstruction). Have the authors considered a fixed setting for which the observation locations are the same? Is there a minimum number of sensors required for a certain accuracy? Are there any insights about using this approach for sensor placement problems? In equation 3, the authors propose using linear interpolation: what about other choices of interpolant? Do any studies show the performance of other choices? 
It seems to me that a linear interpolant may require a larger number of observation locations; any insights? In equation 7, what is the order of such analysis? Can the authors prove this converges to the actual dynamics? It seems heuristic. By order, I mean the order of accuracy of the approximation of the derivative in terms of the distance to neighborhood points (big $O(s^n)$). In equations 12 and 13, what is the implication of assuming that the continuity-inducing prior is Gaussian? Such an assumption makes the solution of the problem computationally tractable, but what if the actual process is non-Gaussian? What are the limitations? In the results section, for shallow water, the model for d=5 (and maybe even 4) outperforms the fully observed MAE, which is based on the true d=3. Are there any insights for this? Minor comment: the notation for $\partial z/\partial x$ on line 56 and elsewhere with a dot is a bit obscure. Maybe a more common choice of notation is to use $n$ as the degree of differentiation. Also, A is capitalized on line 236. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We address the raised concerns below. **Heuristics** We would be glad to rely on theory to guide all our choices, but unfortunately it is not always available for neural network parameterized models, and what remains is heuristics and empirical evidence. An interpolation method is often chosen based on a modeling decision or empirical evidence on its performance. We chose linear interpolation since, for our case, it provides sufficient accuracy and computational efficiency. However, your comment inspired us to investigate other interpolation techniques. We added two new interpolation methods: k-NN and inverse distance weighting, and tested them on spatial grids with different resolutions. Please see the global rebuttal for a description of the setup and results. As you correctly noted, we do not use the classical form of the collocation method. Instead, we modify it by combining it with the method of lines (see Section 3.1 for motivation). Based on intuition from finite difference formulas, using a universal neural network based function approximator and decreasing the spatial neighborhood size should improve the accuracy of derivative estimation and hence the predictive performance. However, in practice the best performance is achieved with relatively large neighborhood sizes (see Appendix, Fig. 13). The optimal neighborhood size depends on the system being modeled. Overall, this is an important part of our proposed model that guarantees good performance and efficient computation. Please note that we do not use classical multiple shooting and do not optimize over the shooting parameters. Instead, we introduce an encoder that maps observations directly to shooting states, and optimize over the parameters of the encoder. 
As a result, the number of optimization parameters does not grow with the dataset size and spatiotemporal resolution, and no separate optimization loop is required at test time to infer the shooting states. Furthermore, Fig. 15 in the Appendix shows that the adoption of multiple shooting is worthwhile as it reduces training time and improves predictive performance. **Comparison to other methods** We agree that a more detailed comparison to a larger number of relevant methods is required. We added three more methods, a simple baseline NeuralODE [3], and two SOTA methods as suggested by the reviewers: Neural Stochastic PDEs (NSPDE) [4], and Finite Element Networks (FEN) [5]. Please see the global rebuttal for a description of the setup and results. **Questions** **Q1:** Number and positions of sensors. **A1:** Please note that we always use fixed observation locations (we will clarify this point in the revision). Studying optimal sensor placement is outside the scope of our work. However, moving observation locations could indeed be beneficial for the model's performance, especially if they are moved to parts of the space where more accurate resolution is required. **Q2:** Equation 3, other interpolation methods and number of observation locations. **A2:** We added results for other interpolation methods (see above). Another study ([1], Fig. 6) compared different types of interpolants, including learned ones, and showed little difference relative to the linear interpolation. Given a fixed number of observation locations, different interpolation techniques have different trade-offs between accuracy and computational efficiency. Linear interpolation is more on the efficiency side, but as we show in Fig. 8 and Table 1 above, it performs well even on very coarse spatial grids. **Q3:** Equation 7 and convergence to true dynamics. **A3:** The fact that our model makes accurate predictions implies that the dynamics it learns are close to the true dynamics of the system. 
Giving theoretical convergence guarantees is challenging due to complexity of the model. **Q4:** Implication of assuming that the continuity inducing prior is Gaussian. **A4:** The purpose of the continuity prior is to bring the shooting and the system states close to each other. Assuming Gaussian prior implies that closeness is measured in terms of the squared distance. Note that the underlying process that all neural PDE methods aim to learn is a continuous-time deterministic PDE, and the sole motivation of multiple shooting is to introduce an approximation that converts the original problem into a form that enables efficient and robust optimization. In other words, the choice of the continuity prior corresponds to a choice of a multiple shooting approximation. **Q5:** Model performance for different latent state dimensions. **A5:** Based on our experience, the best performance in latent-state dynamic models is achieved for latent space dimensions larger than the true system state dimension. Similar observations were made in other works, e.g., [2]. A possible explanation for this is that in larger latent spaces simpler dynamics might be sufficient to model the system's behavior. **References** [1] MAgNet: Mesh Agnostic Neural PDE Solver, https://arxiv.org/abs/2210.05495 [2] Non-linear State-space Model Identification from Video Data using Deep Encoders, https://arxiv.org/abs/2012.07721 [3] Neural Ordinary Differential Equations, https://arxiv.org/abs/1806.07366 [4] Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics, https://arxiv.org/abs/2110.10249 [5] Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks, https://openreview.net/forum?id=HFmAukZ-k-2
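The multiple-shooting objective with a Gaussian continuity prior (A4 above) reduces to a squared-distance penalty between each integrated segment's end and the next segment's shooting state. A toy editor-added sketch follows; the scalar linear dynamics, forward Euler integrator, and all names are illustrative assumptions, not the authors' implementation.

```python
def integrate(z0, a, steps, dt):
    # Forward Euler for dz/dt = a*z, a stand-in for the learned latent dynamics.
    z = z0
    for _ in range(steps):
        z = z + dt * a * z
    return z

def multiple_shooting_loss(z_obs, shoot, a, steps, dt):
    """Data term plus the Gaussian continuity prior, i.e. a squared-distance
    penalty between each integrated segment end and the next shooting state."""
    data = cont = 0.0
    for i, s in enumerate(shoot):
        z_end = integrate(s, a, steps, dt)
        data += (z_end - z_obs[i]) ** 2            # fit the observations
        if i + 1 < len(shoot):
            cont += (z_end - shoot[i + 1]) ** 2    # continuity penalty
    return data + cont

a, dt, steps, n_seg = -1.0, 0.01, 10, 5
# Consistent shooting states: each segment starts where the previous one ends.
shoot = [1.0]
for _ in range(n_seg - 1):
    shoot.append(integrate(shoot[-1], a, steps, dt))
z_obs = [integrate(s, a, steps, dt) for s in shoot]
print(multiple_shooting_loss(z_obs, shoot, a, steps, dt))  # ~0 when consistent
```

In the paper, the shooting states are not free optimization variables as in classical multiple shooting; they are produced by the encoder, which is why the parameter count stays fixed.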
Iterated Deep Q-Network: Efficient Learning of Bellman Iterations for Deep Reinforcement Learning
Reject
Summary: This paper focuses on learning the projection of the empirical Bellman operator's iterations onto the space of the function approximator (a neural model). This is done by increasing the number of gradient steps, using multiple heads with a certain form of update. While retaining the same total number of gradient steps and samples compared to common approaches, the proposed method seems to provide better results (at the cost of retaining multiple heads and more computation). Strengths: The idea is interesting, novel and practical. The paper also **experimentally** shows noticeable improvement over various baselines. Weaknesses: - The presented method is quite simple and could have been presented much more efficiently with simple math and direct explanation rather than lengthy discussions over multiple figures. - The choice of $K$ seems to have a significant impact on the behaviour, which also varies depending on the domain. Suggestion: some formal analysis (e.g., the algorithm's variance) could be useful to provide better insight about what to expect from larger $K$ in terms of other characteristics such as the transition kernel. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The main idea of the paper is simple: using multiple heads for a Q network and computing the Bellman loss for each of them using its preceding head’s target, which provides multiple grad steps at once for a given transition (also performs the rolling step more frequently). This has been presented through a long (and figure-based) discussion. While I did like the figure-based discussion of the “related work” section, I believe a set of equations can be far more efficient for explaining the method and comparing it against other techniques. Honestly, looking at Eq 2 with a short paragraph of explanation is sufficient (and way better) to understand what is going on. The current form is ponderous and hard to follow. 
- In section "Preliminaries": the policy is assumed to be **deterministic**. Why? (This is a significant limitation and calls for more discussion.) - It’s been said both in the abstract and section 4.1 that iDQN concerns the future Bellman iterations. Looking at the loss function (Eq 2), all the $Q_k$’s corresponding loss terms are computed using the same transition tuple. I do not see why it has anything to do with “future Bellman iterations”. - I liked figures 1 & 2 with the corresponding discussion of section 3. One missing point here is the value-mapping idea: to functionally map $Q$ into a different space and perform the learning, then map back to the space of $Q$. This basically alters the actual $Q$ space and is inherently different from choosing a different approximator. Hence, the approximator’s space boundary may retain better properties such as remaining homogeneously close to $Q^*$ for a larger subset of states. See this paper [and possibly others]: https://arxiv.org/abs/2203.07171; note that this paper also discusses orchestration of multiple mappings... - For the ensemble techniques, this paper is of particular interest, which also provides a neat way of diversification (hence, requiring a smaller ensemble size): https://arxiv.org/abs/2110.01548. Other comments: - L78--79 --> “... it is at least $\gamma$ closer ...” --> if being picky, this is not accurate. Rather, the distance is shrunk by $\gamma$ (as a multiplier, not a subtractor). - Beginning of L86 -->. “In the optimal case...” --> what do you mean? - Eq 2 --> what is $r’$ ? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: While the authors mentioned at the end of Introduction that "We conclude the paper by discussing the limits of iDQN ...", they apparently forgot to do so! No discussion of limitations is provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the extensive feedback. It seems that this work has raised questions we are happy to discuss. **Weaknesses** > 2. The choice of $K$ [...] Several reviewers have raised this point. We kindly ask the reviewer to refer to point $I$ of the general answer that addresses this specific point. **Questions** > 1. The main idea of the paper is simple: [...] We thank the reviewer for pointing this out. We understand that the reviewer suggests that we clarify Section $4$. This is why we propose a new version in which the core idea is explained before discussing the benefit of iDQN. We believe the following can increase the understandability of Section $4$: We propose an approach built on top of DQN. In practice, this new idea consists of changing the loss of DQN such that it includes $K$ temporal difference errors instead of one: $\mathcal{L}(s, a, r, s' \vert \theta) = \sum_{k=1}^K \left(Q_k(s, a \vert \theta) - r - \gamma \max_{a'} \bar{Q}_{k-1}(s', a' \vert \bar{\theta}) \right)^2$ where $\theta$ denotes the online parameters and $\bar{\theta}$ the target parameters. The $k^{th}$ learned $Q$-function corresponds to the $k^{th}$ Bellman iteration and is denoted $Q_k$. The way the loss is computed from the neural network’s architecture is presented in Figure $4a$. One can see how the $Q$-functions are chained one after the other to learn the Bellman iterations. In iDQN, updating the target networks does not bring the target parameters to the next Bellman iteration like in DQN. It simply refines their positions to be closer to the online networks to allow better estimates of the iterated $Q$-functions. To go further in the Bellman iterations, we periodically consider a new online $Q$-function and discard the first target network, in the sense that the index $k$ in the loss would now go from $2$ to $K+1$. We call this procedure a rolling step. In practice, the rolling step is simple to implement. 
A new head, with index $K + 1$, is added to the neural network of Figure $4a$, and the first head is removed. This leads us to introduce a new hyper-parameter that indicates the frequency at which the rolling step is performed. It is worth noting that if $K$ is set to $1$ and if the rolling step frequency is synchronized with the target update frequency in DQN, then we recover DQN, i.e., iDQN with $K = 1$ is equal to DQN. This main idea emerges naturally from the representation developed in Section $3$. In DQN, Figure $1$ illustrates that to learn $2$ Bellman iterations, we first need to wait until the first iteration is learned, and then we need to update the target before starting to learn the second iteration. Conversely, we propose to use a second online network that learns the second Bellman iteration while the first Bellman iteration is being learned. The target for the second online network is created from a second target network that is frequently updated to be equal to the first online network. Figure 3a shows how iDQN behaves in the space of $Q$-functions. It is important to understand that in iDQN with $K = 2$, both online networks are learned at the same time. As explained earlier in this section, we can learn a following Bellman iteration by adding a new online network $Q_3$ that would use a new target network $\bar{Q}_2$ set to be equal to the last online network $Q_2$, as shown in Figure 3b. In the meantime, the first target and online networks are discarded to keep memory usage constant. In practice, $K$ can be increased until memory usage becomes an issue. In DQN, the actions are ...[Continuing with Section $4$ as it is in the submission.] > 2. In section "Preliminaries": [...] The policy can be stochastic. We will write "A policy" and remove the term "deterministic". > 3. It’s been said both in the abstract and [...] All $Q_k$'s in the loss are trained with the same samples. 
This is a key advantage of the proposed method (iDQN): each $Q$ function is trained with more samples or, put differently, the samples are used more often. It is important to point out that the samples are not related to the Bellman iterations. They are generated by the behavioral policy (see Section 5.1, line 251). The Bellman iterations are learned by a pair of target $Q$ functions and an online $Q$ function whose index indicates the Bellman iteration count. In DQN, only one Bellman iteration is trained at a time through the online $Q$ function. However, iDQN does the same and additionally learns the $K - 1$ following Bellman iterations through the $K - 1$ online $Q$ functions added 'after' the first online network. The pairs of target and online $Q$ functions are attached together to form a chain, as is shown in Figure 3a. > 4. I liked figures 1 & 2 [...] We thank the reviewer for pointing us to this interesting work. We will add it to the related work section. > 5. For the ensemble techniques, [...] We thank the reviewer for broadening our scope of related works. We will happily add it to the related work section. Other comments: > L78--79 --> [...] We agree with the reviewer that the proposed version is more accurate. We will add it to the final version of the paper. > Beginning of L86 -->. “In the optimal case...” --> what do you mean? For each Bellman iteration, the goal is to train the online network to be as close as possible to the target computed from the target network. The equality is unlikely because the target can be located outside of the space of representable functions (the space of representable $Q$ functions is shown in green in all figures). This is why 'in the optimal case', the online network is located at the projection of the target onto the space of representable $Q$ functions. > Eq 2 --> what is $r'$? This is a mistake. Thank you for pointing it out. It should be $r$. We will add it to the final version of the paper. 
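For concreteness, the iDQN loss quoted in this rebuttal, a sum of $K$ chained one-step TD errors computed on the same transition, can be sketched in a few lines of NumPy. This is an editor-added illustration: the random Q-values, shapes, and hyper-parameters are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_actions, gamma, r = 5, 4, 0.99, 1.0

# Q_k(s, a | theta) at the taken action, for online heads k = 1..K.
q_online_sa = rng.standard_normal(K)
# \bar{Q}_{k-1}(s', . | \bar{theta}) over actions, for target heads k-1 = 0..K-1.
q_target_next = rng.standard_normal((K, n_actions))

def idqn_loss(q_online_sa, q_target_next, r, gamma):
    """Sum of K chained one-step TD errors on the same transition:
    online head k regresses onto the target built from target head k - 1."""
    targets = r + gamma * q_target_next.max(axis=1)  # one bootstrapped target per head
    return np.sum((q_online_sa - targets) ** 2)

print(idqn_loss(q_online_sa, q_target_next, r, gamma))
```

With $K = 1$ this collapses to the standard DQN loss on a single transition, which matches the rebuttal's remark that iDQN with $K = 1$ recovers DQN.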
--- Rebuttal Comment 1.1: Comment: I appreciate the authors' feedback and serious consideration. I am willing to increase the score. > choice of $K$ + newly added theory: Thanks for the new plots. Together with the formal exposition, it's a significant improvement. A couple of points: * I noticed $k$ goes from 0 to $K$, meaning $K+1$ in total; I think this is a typo. In the theorem, it's from 1 to $K$ though. * In my view, this sentence: "DQN minimizes only one term of the bound, while iDQN minimizes the whole sum of terms we have influence over" is actually what summarizes iDQN the best. * I cannot wrap my head around Figure E (in the submitted PDF). The x-axis is $k$, which should have gone from 1 to $K$ for each colour, right? Say orange should go from 1 to 10, but it looks like all colours go all the way to 20! Can you clarify? > The main idea of the paper is simple... Suggestion (minor): The sentence is a bit confusing. To avoid confusion with n-step learning, perhaps say it like this: "... a particular ensemble of K one-step temporal difference instead of just one". --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the valuable suggestions. We are glad to see that our rebuttal has been appreciated. > 1. I noticed $k$ goes from $0$ to $K$, meaning $K + 1$ in total; I think this is a typo. In the theorem, it's from $1$ to $K$ though. This is actually intentional. To learn $K$ Bellman iterations, we need $K + 1$ $Q$-functions. There are indeed $K$ terms in the loss, but the first term uses $Q_0$ ($Q_{k - 1}$ with $k = 1$). > 3. I cannot wrap my head around Figure $E$ (in the submitted PDF). The x-axis is $k$, which should have gone from $1$ to $K$ for each colour, right? Say orange should go from $1$ to $10$, but it looks like all colours go all the way to $20$! Can you clarify? The Reviewer is correct.
We simply repeat the process for every experiment until we reach $20$ Bellman iterations so that each experiment has the chance to perform the same number of Bellman iterations. More precisely, after a pre-defined number of gradient steps, we only keep the last $Q$-function and learn the $K$ following Bellman iterations by duplicating the kept $Q$-function $K$ times and by using the loss of iDQN. > 4. Suggestion (minor): The sentence is a bit confusing. To avoid confusion with n-step learning, perhaps say it like this: "... a particular ensemble of K one-step temporal difference instead of just one". Thank you for the suggestion. We agree that this new version is more effective: We propose an approach built on top of DQN. In practice, this new idea consists of changing the loss of DQN such that it is composed of a particular ensemble of $K$ one-step temporal differences instead of one: ... We are glad that the Reviewer is willing to increase the score. However, we see that the score is still the original one. Please let us know if further clarification is needed to increase the score.
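For concreteness, the loss and rolling step discussed in this thread can be sketched in a few lines. The sketch below is a tabular toy, not the paper's Atari implementation: the array shapes, the discount factor, and the duplication scheme used in `rolling_step` are illustrative assumptions.

```python
import numpy as np

gamma = 0.99  # discount factor (illustrative)

def idqn_loss(online_qs, target_qs, batch):
    """Sum of K one-step TD terms: Q_k regresses on Gamma* Qbar_{k-1}.

    online_qs: [Q_1, ..., Q_K], target_qs: [Qbar_0, ..., Qbar_{K-1}],
    each an (n_states, n_actions) array. All terms share the same samples.
    """
    s, a, r, s_next = batch
    loss = 0.0
    for q_k, q_bar in zip(online_qs, target_qs):
        target = r + gamma * q_bar[s_next].max(axis=1)  # Gamma* Qbar_{k-1}
        loss += np.mean((target - q_k[s, a]) ** 2)
    return loss

def rolling_step(online_qs, target_qs):
    """Discard the first pair and append a new head, duplicated from the
    last online network (this initialization scheme is an assumption)."""
    new_targets = target_qs[1:] + [online_qs[-1].copy()]
    new_onlines = online_qs[1:] + [online_qs[-1].copy()]
    return new_onlines, new_targets
```

With $K = 1$ and the rolling step synchronized with DQN's target update, the sum collapses to DQN's single TD term, matching the equivalence stated above.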
Summary: This paper presents Iterated Deep Q-Network (iDQN), a new DQN-based algorithm that incorporates multiple Bellman iterations into the training loss. The paper highlights the limitations of traditional RL methods that only consider a single Bellman iteration and proposes iDQN as a solution to improve learning. The algorithm leverages the online network of DQN to build targets for successive online networks, taking into account future Bellman iterations. The paper evaluates iDQN against relevant baselines on 54 Atari 2600 games and demonstrates its benefits in terms of approximation error and performance. Strengths: 1. The proposed iDQN algorithm introduces a novel approach to incorporate multiple Bellman iterations into the training loss, addressing the limitations of traditional RL methods. 2. The paper provides a well-structured review of relevant literature, discussing the behavior of various Q-learning algorithms in the space of Q-functions. This analysis helps in understanding the motivation behind iDQN and its relationship with other methods. 3. The empirical evaluation on selected Atari games demonstrates the superiority of iDQN over its closest baselines, DQN and Random Ensemble Mixture. This provides empirical evidence of the effectiveness of the proposed approach. Weaknesses: 1. It would be helpful if the paper included more comparisons with widely-known baselines in the field. While the paper compares iDQN to DQN and Random Ensemble Mixture, it would be valuable to see how iDQN performs against other popular RL algorithms like R2D2. 2. Some parts of the paper could be further clarified to improve the reader's understanding. For example, the explanation of the loss function and the graphical representations of DQN variants could be made more concise and intuitive. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can you demonstrate the performance table of Atari 26/54 games? 2.
Can you continue to improve the demonstration of Figures 1, 2, and 3? It's difficult to understand what they are demonstrating. 3. Will you provide code or implementation details? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: It would be helpful if the paper included more comparisons with widely-known baselines in the field. This paper has no negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. **Weaknesses** > 1. It would be helpful if the paper included more comparisons with widely-known baselines in the field. While the paper compares iDQN to DQN and Random Ensemble Mixture, it would be valuable to see how iDQN performs against other popular RL algorithms like R2D2. Several reviewers have raised this point. We kindly ask the reviewer to refer to point $II$ of the general comment, which tackles this specific point. **Questions** > 1. Can you demonstrate the performance table of Atari 26/54 games? In the supplementary materials, we provide the learning curves of iDQN along with other methods on all games. We chose to represent the performance profiles to show the distribution of the final scores, as recommended by [1]. If the reviewers think it adds clarity, we will gladly add the table of the final scores of iDQN and other relevant DQN methods in addition to the learning curves and the performance profiles. > 2. Can you continue to improve the demonstration of Figures 1, 2, and 3? It's difficult to understand what they are demonstrating. We are highly interested in increasing the clarity of the figures. We believe the understandability of the figures can be improved through their captions. Please find updated versions of the figure captions that we would like to add in the final version of the paper: Figure 1: Graphical representation of DQN in the space of $Q$-functions $\mathcal{Q}$. The space of parameterized $Q$-functions $\mathcal{Q}_{\Theta}$ is shown in green. The optimal $Q$-function $Q^*$ is in most cases not finitely representable, hence not belonging to $\mathcal{Q}_{\Theta}$. DQN makes use of a target network $\bar{Q}_{k-1}$ to learn its optimal Bellman iteration $\Gamma^* \bar{Q}_{k-1}$, also called the target, with an online network $Q_k$. Each iteration is learned by minimizing the distance between the target $\Gamma^* \bar{Q}_{k-1}$ and the online network $Q_k$ (see the red line).
(a) Starting from a random $Q$-function $\overline{Q}_0$, the first Bellman iteration is learned via an online network $Q_1$ using stochastic gradient descent. (b) After a predefined number of gradient steps, the online network is frozen and used as a target network, denoted $\overline{Q}_1$, to learn the second Bellman iteration. $Q_2$ is the online network learning the second Bellman iteration. Figure 2: Graphical representation of DQN variants in the space of $Q$-functions $\mathcal{Q}$. (a) One common way of improving DQN is to develop a more efficient empirical Bellman operator, denoted $\tilde{\Gamma}$. This can lead to more stable behavior, hence easing the learning process. Another way is to modify the space of representable $Q$-functions, denoted $\tilde{\mathcal{Q}}_{\Theta}$, so that the optimal $Q$-function $Q^*$ is closer to this space, increasing the chance of learning a $Q$-function closer to $Q^*$. (b) To better explore the space of representable $Q$-functions $\mathcal{Q}_{\Theta}$, REM keeps in memory $3$ target networks $\overline{Q}_0^0$, $\overline{Q}_0^1$ and $\overline{Q}_0^2$ and $3$ online networks $Q_1^0$, $Q_1^1$ and $Q_1^2$. Similarly to DQN, at each gradient step, REM uses a target $Q$-function $\overline{Q}_0^{\alpha}$ computed as a random convex combination of the stored target networks. This target $Q$-function learns its optimal Bellman iteration $\Gamma^*\overline{Q}_0^{\alpha}$ with an online network $Q_1^{\alpha}$ computed from the same convex combination of the stored online networks. Figure 3: Graphical representation of iDQN in the space of $Q$-functions $\mathcal{Q}$. iDQN makes use of target networks $\overline{Q}$ to learn their optimal Bellman iterations $\Gamma^*\overline{Q}$, also called targets, with online networks $Q$. Each iteration is learned by minimizing the distance between the target $\Gamma^*\overline{Q}_{k-1}$ and the online network $Q_k$ (see the red lines).
The target update frequency regulates the distance between the target network and the online network corresponding to the same Bellman iteration (shown in dotted points). In this figure, iDQN learns $2$ Bellman iterations at once. (a) In this example, iDQN starts by learning the first $2$ Bellman iterations. This figure depicts what happens after all the networks have been initialized randomly, and after a few gradient steps and target parameter updates have been performed. (b) A rolling step is performed to learn the third Bellman iteration. This is done by incrementing all the indices by one. As a result, the first target network $\overline{Q}_0$ is discarded, and a new online network $Q_3$ is learned. > 3. Will you provide code or implementation details? Please find the code and the implementation details in the supplementary materials. We plan to publish the code via a GitHub link upon acceptance. [1] Agarwal, R., Schwarzer, M., Castro, P. S., Courville, A. C., and Bellemare, M. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34, 2021. --- Rebuttal Comment 1.1: Title: RE: Comment: The author's rebuttal appears to address your concerns. Can you clarify if you have additional questions/concerns?
Summary: The paper considers the problem of how to get accurate approximations of optimal Q-functions. The paper introduces a new algorithm called iterated DQN (iDQN). iDQN incorporates multiple consecutive Bellman iterations into the training process, which aims to allow for better approximation of optimal action-value functions. It uses the online network of DQN to build a target for a second online network, and so on, for considering future Bellman iterations. The authors conducted several experiments based on Atari games by comparing iDQN with baseline methods, including DQN and Random Ensemble Mixture. Strengths: - Significance and Originality: The topic that the paper studies - how to learn the Bellman iterations efficiently - is an important topic in reinforcement learning. The authors propose a simple yet effective method, called iterated DQN, to improve the learning efficiency. Specifically, iterated DQN uses a second online Q-network for learning the second Bellman iteration simultaneously, where the target for the second online Q-network is created from a second target network according to the first online network. In this way, the loss can include K-1 more terms compared to the original loss used in DQN. This way of using an ensemble seems novel and allows for improved efficiency. - Clarity: The paper is well-written and easy to follow, with very clear illustrations of the updates for DQN, REM, and iterated DQN as in Figures 1-4. Weaknesses: - Quality: The paper presents a simple yet effective idea, but it could be further strengthened, particularly in theoretical and empirical analysis. First, the authors could provide a theoretical guarantee for iterated DQN by examining its convergence speed, in addition to the intuitive explanation given in Section 4.1. This would lend credibility to their claims.
Second, the empirical validation raises concerns, as iterated DQN's performance is only marginally better than that of previous baseline methods such as DQN (Adam), C51, and REM. This modest improvement does not strongly support the paper's assertions. Lastly, it would be beneficial for the authors to include a memory comparison, as employing more Q-networks may lead to increased memory costs, which is an important consideration for practical applications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors also develop a theoretical guarantee for iterated DQN by analyzing its convergence speed besides the intuitive explanation in Section 4.1? It would be better if the claim is also demonstrated theoretically. 2. My main concern for the paper is about the empirical validation part. Iterated DQN actually performs very closely to previous baseline methods including DQN (Adam), C51, and REM, with quite small improvement margin, which does not support the claim in the paper well. 3. Can the authors also show the memory comparison, as using more Q-networks can induce a larger memory cost? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed some of the limitations by considering other value-based methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and insightful questions. > 1. Can the authors also develop a theoretical guarantee for iterated DQN by analyzing its convergence speed besides the intuitive explanation in Section 4.1? It would be better if the claim is also demonstrated theoretically. Several reviewers have raised this point. Please refer to point $I$ of the general comment that tackles this point. > 2. My main concern for the paper is about the empirical validation part. Iterated DQN actually performs very closely to previous baseline methods including DQN (Adam), C51, and REM, with quite small improvement margin, which does not support the claim in the paper well. Several reviewers have raised this point. Please refer to point $II$ of the general comment that tackles this point. > 3. Can the authors also show the memory comparison, as using more Q-networks can induce a larger memory cost? Suppose we denote by $C$ the memory necessary to store the parameters of the convolutional layers, and by $F$ the memory used for the parameters of the fully connected layers. DQN needs $2 (C + F)$; the $2$ comes from the fact that there is a target and an online network. For iDQN, the memory used is $2 (2C + (K + 1)F)$. There is a target and an online network as well, which is why there is a factor of $2$ in front. Then, as shown in the supplementary material in Figure D, $2$ sets of convolutional parameters are stored along with $K + 1$ heads. More precisely, the classical architecture used in Atari games requires $16$ MB of memory, while iDQN with $K = 5$ requires $92$ MB, and $168$ MB for $K = 10$. It is worth noticing that these quantities are negligible compared to the space the replay buffer needs, which can reach several GBs even with some memory optimization tricks. We would be glad to add this analysis to the final version. --- Rebuttal Comment 1.1: Title: RE: Comment: Did the authors' point $II$ address your primary concern about the empirical validation?
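The memory accounting in this reply can be written out as a quick sanity check. The helper names and example sizes below are illustrative assumptions; only the two formulas, $2(C + F)$ and $2(2C + (K + 1)F)$, come from the discussion above.

```python
def dqn_memory(conv, fc):
    """DQN stores a target and an online network: 2 (C + F)."""
    return 2 * (conv + fc)

def idqn_memory(conv, fc, K):
    """iDQN stores 2 sets of convolutional parameters along with K + 1 heads,
    on both the target and the online side: 2 (2C + (K + 1) F)."""
    return 2 * (2 * conv + (K + 1) * fc)

# Example with arbitrary unit sizes (not the actual Atari figures):
overhead = idqn_memory(2, 3, 5) - dqn_memory(2, 3)  # extra cost of K = 5
```

Since the fully connected heads dominate the extra cost, the overhead grows linearly in $K$, which is consistent with the reported 92 MB ($K = 5$) versus 168 MB ($K = 10$).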
Summary: This work proposes an extension to DQN aimed at improving projection steps in Q value updates. There are two main contributions of the paper: - The paper intuitively explains the Q-value learning characteristics of DQN variants caused by a mismatch between the optimal Bellman operator and the set of representable Q functions. - The authors propose the iterated DQN (iDQN) method, which keeps track of a collection of K online Q-functions. When updating these Q networks, the previous Q function is used as the target network. Experiment results show that iDQN outperforms a collection of DQN variants on the standard Atari benchmark. Further ablation studies explore the effect of K and sampling strategies for iDQN and conclude that bigger K and uniform sampling of Q networks are in general preferable. Strengths: - The figures explaining the projection characteristics of existing DQN variants are very intuitive and provide excellent motivation for the proposed method. - The iDQN method, the newly introduced hyperparameters, and the experiment settings are communicated clearly and transparently. - The results on Atari are convincing. - The ablation studies on the choice of K and sampling method are insightful. Weaknesses: - The figures used to explain the projection behaviors of DQN variants are created for illustration, but not from actual experiments. - As discussed by the authors, iDQN doesn't outperform more recent DQN variants which employ other tricks to improve performance. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The online Q networks are known to be unstable during training. Could later Q networks suffer from compounding stability issues as they use previous online Q networks as targets? - Connecting back to weakness #1, is it possible to create a visualization of the projection behaviors in some toy environments? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors discussed how iDQN is not able to outperform more recent DQN variants using other tricks. It would also be nice if the authors can discuss the training stability of iDQN. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the useful suggestions and comments. **Weaknesses** > 2. As discussed by the authors, iDQN doesn't outperform more recent DQN variants which employ other tricks to improve performance. Several reviewers have raised this point. We kindly ask the reviewer to refer to point $II$ of the general answer that addresses this specific point. **Questions** > 1. The online Q networks are known to be unstable during training. Could later Q networks suffer from compounding stability issues as they use previous online Q networks as targets? Thank you for pointing this out. This is precisely why we use a target network, so that the targets are not updated as frequently as the online networks. This is enough to avoid instability in practice. More precisely, we update the targets every $30$ training steps. > 2. Connecting back to weakness #1, is it possible to create a visualization of the projection behaviors in some toy environments? We understand the point raised by the reviewer. In the additional PDF file, we added an additional experiment on a toy offline problem: the Linear Quadratic Regulator (see Figure G). In this problem, the state and action spaces are continuous and one-dimensional. The dynamics are linear: for a state $s$ and an action $a$, the next state is given by $s' = 0.8 s - 0.9 a$, and the reward is quadratic: $r(s, a) = 0.5 s^2 + 0.4 sa - 0.5 a^2$. We choose to parametrize the space of $Q$-functions with $2$ parameters $(M, G)$ such that, for a state $s$ and an action $a$, $Q(s, a) = M a^2 + G s^2$. To avoid an overly large space of representable $Q$-functions, we constrain the parameter $M$ to be negative and the parameter $G$ to be between $-0.4$ and $0.4$. Starting from some initial parameters, we performed $30$ gradient steps with a learning rate of $0.05$ using the loss of DQN and iDQN.
Both figures show the space of representable $Q$-functions $\mathcal{Q}_{\Theta}$ in green, the optimal $Q$-function $Q^*$, the initial $Q$-function $Q_0$, and its optimal Bellman iteration $\Gamma^* Q_0$. The projection of the optimal Bellman iteration is also shown with a dotted line. The figure on the left shows where the online network $Q_1$, computed from DQN, is after $30$ gradient steps. The figure on the right shows where the online networks $Q_1$ and $Q_2$ of iDQN are after $30$ gradient steps. As we claim in the paper, iDQN finds a $Q$-function $Q_2$ closer to the optimal $Q$-function $Q^*$ than the $Q_1$ found by DQN. The figure on the left closely resembles Figure 1a. Likewise, the figure on the right looks like Figure 3a, showing that the high-level ideas presented in the paper are actually happening in practice. --- Rebuttal Comment 1.1: Comment: Thanks for adding the additional experiment! The toy example makes sense and supports the motivation of the paper. --- Reply to Comment 1.1.1: Comment: We are glad to see that the concerns raised by the reviewer have been resolved by our responses and additional experiments.
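The toy experiment described in this thread is easy to reproduce. The sketch below follows the dynamics, reward, parametrization, constraints, learning rate, and step count given above; the discount factor, the sampling range of $(s, a)$, the initial parameters, and the target update schedule (refreshing $\overline{Q}_1$ to $Q_1$ at every step) are our own assumptions, not details from the rebuttal.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9  # discount factor (an assumption; not stated in the rebuttal)

# Batch of (s, a) pairs; the sampling distribution is an assumption.
s = rng.uniform(-2.0, 2.0, 256)
a = rng.uniform(-2.0, 2.0, 256)
s_next = 0.8 * s - 0.9 * a                 # linear dynamics
r = 0.5 * s**2 + 0.4 * s * a - 0.5 * a**2  # quadratic reward

def project(params):
    """Constrain M to be negative and G to lie in [-0.4, 0.4]."""
    M, G = params
    return np.array([min(M, -1e-6), float(np.clip(G, -0.4, 0.4))])

def sgd_step(params, target_params, lr=0.05):
    """One gradient step on the TD loss for Q(s, a) = M a^2 + G s^2.
    Since M_target < 0, max_{a'} Q_target(s', a') is attained at a' = 0."""
    M, G = params
    target = r + gamma * target_params[1] * s_next**2
    td = (M * a**2 + G * s**2) - target
    grad = np.array([np.mean(2 * td * a**2), np.mean(2 * td * s**2)])
    return project(params - lr * grad)

q0 = project(np.array([-1.0, 0.0]))  # initial (M, G), assumed values

# DQN-style: 30 gradient steps of one online network toward Gamma* Q_0.
q1_dqn = q0.copy()
for _ in range(30):
    q1_dqn = sgd_step(q1_dqn, q0)

# iDQN-style with K = 2: Q_2's target network tracks Q_1 at every step.
q1, q2 = q0.copy(), q0.copy()
for _ in range(30):
    q1 = sgd_step(q1, q0)
    q2 = sgd_step(q2, q1)
```

After training, $Q_1$ matches the DQN solution while $Q_2$ has effectively absorbed a second Bellman iteration, mirroring the left and right figures discussed in this reply.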
Rebuttal 1: Rebuttal: **General comment to all reviewers.** We thank all the reviewers for their valuable feedback. For each reviewer, we address their concerns with specific answers. Here, we summarize the common points, describe the content of the rebuttal PDF in the attachment, and provide answers that we will add to the final version of the paper. ### $I.$ Theoretical analysis of iDQN has been requested by several reviewers (asked by reviewers VNTR, G4ak, and UiiX). We thank the reviewers for pointing out the value of providing a theoretical justification for our proposed iDQN algorithm. Indeed, although we focus mainly on empirical evaluation, it is possible to make a statement about why iDQN should, in principle, be expected to improve upon the performance of DQN. Namely, we can invoke Theorem 3.4 from [1] on error propagation for Approximate Value Iteration (AVI): **Theorem 3.4 [1].** Let $K \in \mathbb{N}^*$, and let $\rho$, $\nu$ be two probability distributions over $\mathcal{S} \times \mathcal{A}$. For any sequence $(Q_k)_{k=0}^K \subset B \left(\mathcal{S} \times \mathcal{A}, R_{\gamma} \right)$, where $R_{\gamma}$ depends on the reward function and the discount factor, we have $|| Q^* - Q^{\pi_K} ||_{1, \rho} \leq C_{K, \gamma, R_{\gamma}} + \inf_{r \in [0, 1]} F(r; K, \rho, \gamma) \left(\sum_{k=1}^{K} \alpha_k^{2r} || \Gamma^*Q_{k - 1} - Q_k ||_{2, \nu}^{2} \right)^{\frac{1}{2}}$ where $\alpha_k$ and $C_{K, \gamma, R_{\gamma}}$ do not depend on the sequence $(Q_k)_{k=0}^K$. The function $F(r; K, \rho, \gamma)$ depends on the concentrability coefficients of the greedy policies w.r.t. the value functions. In simpler words, this theorem bounds the distance between the optimal $Q$-function and the $Q$-function of the policy obtained after $K$ iterations by a term that includes the sum of the approximation errors of all iterations, i.e., $\sum_{k=1}^{K}\alpha_k^{2r} || \Gamma^*Q_{k - 1} - Q_k ||_{2, \nu}^{2}$.
It can be seen that at iteration $k$, the DQN loss $( r + \gamma \max_{a'}Q_{k-1}(s', a') - Q_k(s, a) )^2$ is an unbiased estimator of the approximation error $|| \Gamma^* Q_{k - 1} - Q_k ||$. Likewise, the iDQN loss $\sum_{k=1}^K (r + \gamma \max_{a'}Q_{k-1}(s', a') - Q_k(s, a) )^2$ is an unbiased estimator of the sum of approximation errors $\sum_{k=1}^K || \Gamma^*Q_{k - 1} - Q_k ||$. From there, one can see that at each gradient step, DQN minimizes only one term of the bound, while iDQN minimizes the whole sum of terms we have influence over. Hence, at each gradient step, iDQN can lower the approximation error bound more than DQN. We would be glad to clarify this further if needed and to include it in the final version of the paper. We complement this theoretical analysis with an empirical evaluation on a low-dimensional offline problem, Car-On-Hill [2], where the agent needs to drive an underpowered car to the top of a hill. It has a continuous state space and two possible actions: moving left or right. In this problem, the optimal value function $V^*$ can be computed via brute force [2]. Figure E in the rebuttal PDF shows the distance between the optimal value function $V^*$ and $V^{\pi_i}$, i.e., the value function of the greedy policy of the current action-value function estimate obtained with iDQN. This distance is plotted against the number of Bellman iterations computed during training, for several values of $K$. We recall that iDQN with $K = 1$ is equivalent to DQN, or, more precisely, to FQI, since it is an offline problem. The plot clearly shows that for higher values of $K$, iDQN performs better because it reaches lower approximation errors earlier during training. This relates to the theorem previously described. By increasing the value of $K$, we increase the number of Bellman iterations taken into account at each gradient step.
Hence we decrease the upper bound on the distance between the optimal value function and the current estimate, which is what is happening in the plot. ### $II.$ iDQN is not outperforming all considered variants of DQN (asked by reviewers VNTR, A74j, G4ak, and PHSC). In this work, we focused on validating that iDQN outperforms DQN. Since iDQN is orthogonal to all other proposed improvements of DQN, one can expect that adding them to iDQN would further improve the results. Due to the high computational cost of running Atari games, we could not do a thorough analysis of these combinations yet. In our opinion, this would be the content of future work, similar to what Hessel et al. did in [3]. However, in the rebuttal PDF file, we provide some results that combine iDQN with $K=3$ and Implicit Quantile Networks (IQN) (see Figure F). In the two considered Atari games, iIQN (i.e., iDQN + IQN) outperforms iDQN and IQN, showing that not only is it technically feasible to combine those two algorithms but that it can yield better performance. Interestingly, iIQN even outperforms IQN + 3-step return. We would gladly include these results in the paper and release the code of iIQN upon acceptance. ### $III.$ Figures 1, 2, and 3 are illustrations not made from experimental results (asked by reviewer A74j). This point only concerns one reviewer. Therefore, we only write the answer to reviewer A74j. Nonetheless, we invite reviewers interested in this point to read the answer. [1] Farahmand, A. M. (2011). Regularization in Reinforcement Learning (Doctoral dissertation, University of Alberta). [2] Ernst, D., Geurts, P., and Wehenkel, L. Tree-based batch mode reinforcement learning. JMLR, 6:503–556, dec 2005. ISSN 1532-4435. [3] Hessel, M., Modayil, J., Van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and Silver, D. Rainbow: Combining improvements in deep reinforcement learning. 
In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. Pdf: /pdf/0bacd50f546e1400db2a334edea07e218fb6ed15.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, a new variant of the DQN algorithm, iDQN, is proposed by replacing the classical Bellman iteration with several consecutive Bellman iterations and using multiple Q networks. Intuitively, this new Bellman operator propagates reward signals more efficiently and thus speeds up learning, at the cost of more computation and memory. As the number of consecutive Bellman iterations increases, it is shown that the learning performance of iDQN in Atari games also increases. Strengths: As far as I know, the proposed method is novel. The new algorithm, together with several baselines, is tested in 54 Atari games. Many illustrative pictures are included to help understand the new Bellman operator. The paper is generally well-written. Weaknesses: 1. It would make this work much better if a theoretical analysis of the proposed Bellman operator were provided, such as convergence speed, the effect of the number of Q networks, etc. 2. The performance of iDQN is only slightly better than baselines, such as DQN (Adam). A summarized result (e.g. the first figure in [DQN Zoo](https://github.com/deepmind/dqn_zoo)) would make the comparison in Atari games much clearer. 3. Missing related works about ensemble methods + DQN, e.g. Averaged DQN and Maxmin DQN. 4. Missing baselines: DQN + n-step return. Both iDQN and this baseline try to accelerate the propagation of reward signals. Furthermore, although it is mentioned that "We do not consider other variants of DQN to be relevant baselines to compare with.", more explanations are necessary. 5. It is claimed that "Our approach introduces two new hyperparameters, rolling step frequency and target parameters update frequency, that need to be tuned. However, we provide a thorough understanding of their effects to mitigate this drawback.". However, I don't find the understanding thorough enough. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations of iDQN are that it takes more memory and computation than DQN. Jax is used to parallelize the computation, making the training time increase acceptable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and feedback. > 1. It would make this work much better if a theoretical analysis of the proposed Bellman operator were provided, such as convergence speed, the effect of the number of Q networks, etc. Several reviewers have raised this point. Therefore, we provided a general answer in point $I$ of our overall rebuttal response. Please refer to that response. > 2. The performance of iDQN is only slightly better than baselines, such as DQN (Adam). A summarized result (e.g. the first figure in DQN Zoo) would make the comparison in Atari games much clearer. It seems that there has been a misunderstanding. Figure 5a shows the same plot as in DQN Zoo, with the Interquartile Mean (IQM) instead of the median, as recommended by [1]. As described in [1], the IQM removes the worst 25% and the best 25% of the scores before taking the mean, to avoid the influence of outliers while keeping more than one point. Figure 10a shows the same plot with more variants of DQN. It is worth noting that all DQN variants can be combined with our approach (iDQN). Therefore, outperforming DQN is the key step. In future work, we plan to evaluate other improvements on top of iDQN, but experiments on Atari take some time. > 3. Missing related works about ensemble methods + DQN, e.g. Averaged DQN and Maxmin DQN. Thank you for pointing this out. In case of acceptance, we will add those works to the related work section. > 4. Missing baselines: DQN + n-step return. Both iDQN and this baseline try to accelerate the propagation of reward signals. Furthermore, although it is mentioned that "We do not consider other variants of DQN to be relevant baselines to compare with.", more explanations are necessary. Several reviewers have raised this point. Please refer to point $II$ of the general answer that addresses this specific point. > 5.
It is claimed that "Our approach introduces two new hyperparameters, rolling step frequency and target parameters update frequency, that need to be tuned. However, we provide a thorough understanding of their effects to mitigate this drawback.". However, I don't find the understanding thorough enough. We thank the reviewer for raising this point. We truly believe that we can provide some intuition on how to set these hyperparameters. First, the rolling step frequency defines the speed at which we learn the Bellman iterations. Problems in which the environment is highly stochastic will require more gradient steps to learn a Bellman iteration, hence the need to increase the rolling step frequency. Conversely, problems with sparse rewards and long horizons will be faster to learn with a high rolling step frequency because more Bellman iterations are needed to reach good performance. Second, the target update frequency indicates the speed at which the target networks follow the online networks. Once again, highly stochastic problems will benefit from having a small target update frequency since the positions of the online networks are more likely to be noisy. Conversely, problems with sparse rewards and long horizons can benefit from having the target networks closely following the online networks. We will add this clarification in the final version of the paper. [1] Agarwal, R., Schwarzer, M., Castro, P. S., Courville, A. C., and Bellemare, M. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34, 2021. --- Rebuttal Comment 1.1: Title: Further comment Comment: Thank you for your response. I'm willing to increase the score from 4 to 5 since a theoretical analysis is provided. However, I still have a concern about the effectiveness of the proposed method compared to DQN + n-step return, given that the proposed method uses more computation and memory resources. 
--- Reply to Comment 1.1.1: Title: Clarification on the link between iDQN and $n$-step return Comment: We thank the reviewer for considering our answer and increasing the score. We understand the concern of the reviewer. At first sight, it seems that $n$-step return and iDQN yield similar benefits, but this is not the case. iDQN is a method that allows for better learning of each Bellman iteration without increasing the total number of gradient steps. This advantage comes from the loss that takes into account several Bellman iterations at each gradient step. In contrast, $n$-step return provides a different way of estimating the return. iDQN + $n$-step return is practically feasible and does not incur more memory or computational time over iDQN, just as DQN + $n$-step return incurs none over DQN. As explained in point $II$ of the general answer, the reason why we did not run iDQN + $n$-step return is simply the high computational cost of Atari experiments. Therefore, we do not consider $n$-step return to be a baseline but a possible addition to iDQN left for future work. This clarification would fit perfectly in Section 3.1, where we introduce other empirical Bellman operators. In line $98$, we cite Sutton (1988) [1], in which $n$-step return is introduced. We would gladly add this clarification to this paragraph in the revised version of the paper. [1] Sutton, R. S. Learning to predict by the methods of temporal differences. Machine Learning, 3(1): 9–44, August 1988. URL http://www.cs.ualberta.ca/~sutton/papers/sutton-88.pdf.
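As an aside on the Interquartile Mean used for the aggregate Atari plots discussed in this thread: a minimal sketch of the trimming rule (the function name and pure-Python implementation are our own illustration, not the authors' or rliable's code):

```python
def interquartile_mean(scores):
    """Mean of the middle 50% of scores: the lowest 25% and the
    highest 25% are dropped before averaging, which reduces the
    influence of outliers while keeping more data than the median."""
    s = sorted(scores)
    k = len(s) // 4          # number of scores trimmed from each tail
    trimmed = s[k:len(s) - k]
    return sum(trimmed) / len(trimmed)

# A single outlier barely shifts the IQM:
# interquartile_mean([2, 4, 5, 6, 500]) -> 5.0  (mean of [4, 5, 6])
```

Note that Agarwal et al. [1] compute the IQM over per-run, per-game scores with stratified bootstrap confidence intervals; this sketch shows only the trimming rule itself.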
Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality
Accept (spotlight)
Summary: This paper introduces a CIL approach that specifically addresses the use of pre-trained models. It is intriguing and significant to explore the most suitable CIL method for pre-trained models. Strengths: 1. CIL and how to do CIL in pre-trained models are very important 2. The writing is smooth and very easy to follow Weaknesses: Excluding the assumption of a pre-trained model, this paper reduces to the existing research (as mentioned in line 233). Thus, I recommend conducting further analysis on CL for pre-trained models. 1. Considering the extensive exploration of pre-trained models in NLP, it is crucial for this paper to compare its approach with NLP continual learning baselines, e.g., [1,2,3]. 2. I see no reason why the theory should be restricted to prompts alone. Other parameter-efficient tuning methods such as adapters, Lora, and Prefix can easily be utilized. The author should include a comparison with these methods (see 1 references). 3. In addition, I suggest comparing this paper's approach with TIL methods as well, which do not require TII. This would serve as an additional evaluation to assess the proposed method's performance in WTP and TAP. [1]: Achieving forgetting prevention and knowledge transfer in continual learning, NeurIPS 2021 [2]: Continual Pre-training of Language Models, ICLR 2023 [3]: Continual Learning of Natural Language Processing Tasks: A Survey. https://arxiv.org/abs/2211.12701 Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I think the discussion of limitations in Sec. 6 makes sense. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and insightful comments. Below, we provide a point-to-point response to these comments and summarize the corresponding revisions in the final version. **Q1: Considering the extensive exploration of pre-trained models in NLP, it is crucial for this paper to compare its approach with NLP continual learning baselines, e.g., [1,2,3].** A1: Thank you for providing these excellent related works. CTR [1] focused on a task-incremental setting, inserting continual learning modules (typically capsule networks) into two locations of a BERT-like pre-trained model. In contrast, our work focuses on a class-incremental setting without the oracle of task identity at test time, and employs prompts as the ``inserted modules''. DAS [2] focused on continual pre-training of language models to improve their end-task performance, i.e., upstream continual learning. In comparison, our work focuses on improving the performance for all sequentially arrived tasks ever seen, i.e., downstream continual learning. The third reference [3] is a recent survey on continual learning of NLP tasks. It has summarized representative continual learning strategies, in particular advanced parameter-efficient fine-tuning techniques for continual learning with pre-training. As you mentioned, our work focuses on improving prompt-based methods in this direction. As shown in the responses to your Q2 and Reviewer Qf6e's Q5, our work could potentially be generalized to other PEFT techniques, serving as an important future work. We will add the related work and the above discussion in the final version. **Q2: I see no reason why the theory should be restricted to prompts alone. Other parameter-efficient tuning methods such as adapters, Lora, and Prefix can easily be utilized. The author should include a comparison with these methods (see 1 references).** A2: Thank you for your valuable suggestion. 
We agree that other parameter-efficient fine-tuning (PEFT) techniques could be utilized with our theoretical framework (please refer to the response to Reviewer Qf6e's Q5). All of these PEFT techniques can serve as the task-specific parameters in our HiDe-Prompt, differing mainly in their specific forms. Specifically, prompt-tuning and prefix-tuning both prepend a few parameters to input / hidden vectors, which have been discussed in lines 118-124. Adapters insert adaptive parameters between layers, while LoRA learns an additional low-rank matrix to approximate weight updates and adds it to the backbone weights. In fact, we have attempted to implement LoRA in our method, which can still achieve considerable performance (e.g., 87.86%/92.24% on Split CIFAR-100 under Sup-21K/iBOT-21K pre-training). We are now working on implementing other PEFT techniques, which will be released in future work. We will add the above discussion in the final version. **Q3: In addition, I suggest comparing this paper's approach with TIL methods as well, which do not require TII. This would serve as an additional evaluation to assess the proposed method's performance in WTP and TAP.** A3: Thank you for your valuable suggestion. As the task identity is provided at test time, TIL can largely reduce the difficulty of continual learning compared to CIL and is usually implemented with task-specific output layers. Therefore, the TIL-version of our framework retains only WTP but removes both TII and TAP (see this conclusion and a theoretical proof in Appendix A.3). In other words, the continual learning problem for our approach becomes evaluating the performance of prompt-tuning itself, i.e., the ability of learning each task with task-specific prompts, without inter-task interference. Since most TIL methods focused on overcoming catastrophic forgetting, they do not constitute a direct comparison with the TIL-version of our approach in terms of motivation. 
Besides, we empirically validate that the TIL-version of our approach achieves sufficiently high performance (e.g., 97.83%/97.83% and 81.55%/80.37% on Split CIFAR-100 and Split ImageNet-R under Sup-21K/iBOT-21K pre-training, respectively), consistent with the above analysis. We will add the above discussion in the final version. [1] Achieving forgetting prevention and knowledge transfer in continual learning, NeurIPS 2021 [2] Continual pre-training of language models, ICLR 2023 [3] Continual learning of natural language processing tasks: A survey. https://arxiv.org/abs/2211.12701 --- Rebuttal Comment 1.1: Title: Look forward to further feedback Comment: We thank you again for the valuable and constructive comments. We hope you may find our response satisfactory and raise your rating accordingly. We are looking forward to hearing from you about any further feedback. Best, Authors
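To make the LoRA variant discussed in A2 above concrete, here is a rough sketch of a LoRA-style linear layer: a frozen backbone weight plus a trainable low-rank update. The class name, shapes, initialization scale, and NumPy usage are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class LoRALinear:
    """Frozen backbone weight W plus a trainable low-rank update:
    the effective weight is W + B @ A, with rank r << min(d_in, d_out).
    Only A and B would receive gradients during fine-tuning."""

    def __init__(self, d_in, d_out, rank, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))        # frozen backbone weight
        self.A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

    def forward(self, x):
        # Equivalent to (W + B @ A) @ x, but cheaper when rank is small.
        return self.W @ x + self.B @ (self.A @ x)
```

With B initialized to zero, the layer initially reproduces the frozen backbone exactly, so fine-tuning starts from the pre-trained behavior and only gradually deviates as B is updated.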
Summary: This work provides a comprehensive analysis of state-of-the-art prompt-based approaches for continual learning with the use of pre-training. The authors empirically demonstrate a clear performance degradation of current strategies under realistic self-supervised pre-training and extensively analyze the exposed sub-optimality. The authors then provide an in-depth theoretical analysis of the continual learning objective in the context of pre-training, which can be decomposed into three hierarchical components, and propose an innovative approach to optimize them explicitly. Extensive experiments on various pre-training paradigms have demonstrated the clear advantages of the proposed method. Strengths: 1. This paper provides a well-organized formulation of state-of-the-art prompt-based approaches, assessing them from a unified perspective. 2. The empirical analysis reveals the degraded performance of prompt-based approaches under self-supervised pre-training, which is an important issue for practical applications. 3. The theoretical analysis is very interesting. The hierarchical components are specific to continual learning in the context of pre-training and apply to the settings of task-/domain-/class-incremental learning. 4. The proposed method allows for explicit optimization of the continual learning objective through adaptively leveraging pre-training and prompt architectures. Experimental results demonstrate a significant improvement in continual learning performance under different pre-training paradigms. Weaknesses: 1. Based on proofs in supplementary materials, is the notation $\bar{c}$ in Eq.(8) equal to $y$? Please check it. 2. Could the authors further discuss some concurrent related work such as PromptFusion [1] that exploits prompt and FSA [2] that exploits FiLM adapters for continual learning with pre-training. [1] PromptFusion: Decoupling Stability and Plasticity for Continual Learning. arXiv preprint arXiv:2303.07223. 
[2] First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning. arXiv preprint arXiv:2303.13199. 3. Although experimental results have been sufficient enough, it would be better if some fine-grained datasets can be further analyzed, such as CUB. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some minor concerns remain to be addressed. Please refer to the Weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors have properly discussed the limitations and potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and insightful comments. Below, we provide a point-to-point response to these comments and summarize the corresponding revisions in the final version. **Q1: Based on proofs in supplementary materials, is the notation $\bar c$ in Eq.(8) equal to $y$? Please check it.** A1: Yes, as you understand it, the notation $\bar c$ in Eq.(8) is equal to $y$, representing the ground truth label of $x$. For clarity, we will replace $\bar c$ with $y$ and add more explanations in the final version. **Q2: Could the authors further discuss some concurrent related work such as PromptFusion [1] that exploits prompt and FSA [2] that exploits FiLM adapters for continual learning with pre-training.** A2: Thank you for pointing out these excellent related works. We will include a discussion on them in the final version. Briefly, PromptFusion [1] employed two prompt-based models to optimize stability and plasticity, respectively, and combined their predictions in a weighted average. The two prompt-based models are constructed from a pre-trained ViT and an additional CLIP, and the combination of their predictions relies on a replay buffer of old training samples. In contrast, our work focuses on a rehearsal-free setting and only requires a pre-trained ViT, which is more resource-efficient and practical in applications. FSA [2] adapted a pre-trained backbone with FiLM adapters only in the first learning session and fixed it thereafter. In comparison, our work focuses on prompt-based techniques and adapts the backbone in all learning sessions, so as to accommodate subsequent changes in data distributions. **Q3: Although experimental results have been sufficient enough, it would be better if some fine-grained datasets can be further analyzed, such as CUB.** A3: Following your suggestion, we conducted an additional experiment on Split CUB-200-2011, i.e., a random split of its 200 classes into 10 tasks with 20 classes per task. 
The results are summarized as below: | PTM | L2P | DualPrompt | S-Prompt++ | CODA-Prompt | HiDe-Prompt | | -------- | ---------- | ----------- | ----------- | ----------- | ----------- | | Sup-21K | 74.48 | 82.05 | 82.08 | 74.34 | **86.56** | | iBOT-21K | 44.29 | 41.31 | 42.73 | 47.79 | **78.23** | For continual learning of fine-grained classification tasks, the sub-optimality of representative prompt-based approaches is more clearly exposed under self-supervised pre-training (i.e., Sup-21K vs iBOT-21K). In comparison to these baselines, HiDe-Prompt (ours) achieves a substantial lead in performance (more than **30\%**). Through an extensive ablation study, we observe that this is largely due to the optimization of task-adaptive prediction (TAP) through replaying pseudo representations to adapt the final output layer, as the fine-grained classification generally requires a high degree of precision in classification outputs. Therefore, these results further demonstrate the importance of our empirical and theoretical contributions. We will add it in the final version. [1] PromptFusion: Decoupling Stability and Plasticity for Continual Learning. arXiv preprint arXiv:2303.07223. [2] First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning. arXiv preprint arXiv:2303.13199. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal. Comment: I appreciate the authors’ effort to fully address my concerns. I have also read the rebuttal to other reviewers. The generality of the proposed method to other PEFT strategies (e.g., Lora) for continual learning is impressive. I believe this work could be influential in continual learning since efficient fine-tuning is very important for avoiding forgetting. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are happy to know that the reviewer recognizes the contribution and broad impact of our work. We also appreciate the positive feedback and strong support. Best, Authors
Summary: This paper studies the application of prompts in pre-trained models for continual learning. Building upon the problem of [1], the authors introduce the Task-Adaptive Prediction (TAP) for the CIL problem using pre-trained networks. They demonstrate that good TAP, WTP, and TII performances are necessary and sufficient for a good CIL model. The authors emphasize the significance of these factors, especially in the context of continual learning with pre-trained self-supervised models. [1] Kim et al., Theoretical study on solving continual learning. NeurIPS 2022 Strengths: - The paper is easy to follow and the proposed approach is well motivated - The paper has provided theoretical insights - The proposed method is much stronger than the existing baselines Weaknesses: - The method heavily relies on pre-trained models trained with ImageNet. This can be an issue, especially for supervised pre-training, since ImageNet already contains classes similar or identical to classes used for CL and there could be information leak from the pre-training classes to continual learning classes [2]. - The theoretical contribution is somewhat ambiguous since the decomposition and Theorems 1 and 2 are similar to [1]. [2] Kim et al., A multi-head model for continual learning via out-of-distribution replay. CoLLAs, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I couldn’t fully understand why it’s necessary to have TAP in the CL problem using pre-trained networks, considering that it wasn’t required when training from scratch [1]. - I am curious about the performance of the proposed method when the feature extractor is pre-trained using only the classes that are dissimilar to CL classes as in [2] - Is the theoretical analysis only relevant to the prompt-based method? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Refer to Weaknesses and Questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and insightful comments. Below, we provide a point-to-point response to these comments and summarize the corresponding revisions in final version. **Q1: The method heavily relies on pre-trained models trained with ImageNet. This can be an issue, especially for supervised pre-training, since ImageNet already contains classes similar or identical to classes used for CL and there could be information leak from the pre-training classes to continual learning classes [2].** A1: In this work, we follow the pre-training dataset from previous work of prompt-based continual learning and focus on the impact of pre-training paradigms (especially *self-supervised pre-training*). Following your suggestion in Q4, we perform an additional experiment of using only the classes that are dissimilar to downstream CL classes for pre-training [2]. In this experimental setup, our approach achieves a more significant lead in performance (19.05\% on Split CIFAR-100), which is further detailed in our response to Q4. We will add these results in the final version. **Q2: The theoretical contribution is somewhat ambiguous since the decomposition and theorem 1 and 2 are similar to [1].** A2: In this work, we focus on continual learning with pre-training, especially prompt-based continual learning that has recently received significant attention in this direction. Our theoretical contribution lies in the context of **pre-training**, where we demonstrate the necessary conditions to achieve good continual learning performance. This is clearly different from the theorem in [1] about continual learning from scratch. Specifically, the major differences between ours and [1] include: (1) The condition is different. 
We formulate the decomposed components (i.e., TII, WTP and TAP) as $\theta$-conditional probabilities (i.e., $P(\boldsymbol{x} \in \mathcal{X}_{i}|\mathcal{D},\theta)$, $P(\boldsymbol{x} \in \mathcal{X}_{i,j}|\boldsymbol{x} \in \mathcal{X}_{i},\mathcal{D},\theta)$ and $P(\boldsymbol{x} \in \mathcal{X}^{c}|\mathcal{D},\theta)$), where $\theta$ captures the pre-trained knowledge, which is not considered in [1]. (2) Due to the additional introduction of $\theta$, the necessary conditions to achieve good continual learning performance are different from those in [1]. Besides WTP and TII, TAP is especially necessary in the CL problem using pre-training (please refer to A3 for more details). We will add more explanations to make it clearer. **Q3: Why it’s necessary to have TAP in the CL problem using pre-trained networks, considering that it wasn’t required when training from scratch [1].** A3: As stated in A2, from the theoretical perspective, TAP is equivalent to TII with WTP when training from scratch. That means $P(\boldsymbol{x} \in \mathcal{X}^{y}|\mathcal{D},\theta) = P(\boldsymbol{x} \in \mathcal{X}_{\bar{i}}|\mathcal{D},\theta)P(\boldsymbol{x} \in \mathcal{X}_{\bar{i},\bar{j}}|\boldsymbol{x} \in \mathcal{X}_{\bar{i}},\mathcal{D},\theta)$, corresponding to $\delta +\epsilon = \eta$ in our Theorem 1. Therefore, it is unnecessary to have TAP in CIL when training from scratch, which is consistent with [1] as discussed in lines 231-233 of our manuscript. On the other hand, when using pre-training with the hierarchical framework as shown in Fig. 3, TAP and TII with WTP are formulated as two different prediction problems where $P(\boldsymbol{x} \in \mathcal{X}^{y}|\mathcal{D},\theta) \neq P(\boldsymbol{x} \in \mathcal{X}_{\bar{i}}|\mathcal{D},\theta)P(\boldsymbol{x} \in \mathcal{X}_{\bar{i},\bar{j}}|\boldsymbol{x} \in \mathcal{X}_{\bar{i}},\mathcal{D},\theta)$. 
Thus, the final performance of having TAP in CIL (i.e., $\max [P(\boldsymbol{x} \in \mathcal{X}_{\bar{i},\bar{j}}|\mathcal{D},\theta),P(\boldsymbol{x} \in \mathcal{X}^{y}|\mathcal{D},\theta)]$) can outperform that without TAP (i.e., $P(\boldsymbol{x} \in \mathcal{X}_{\bar{i},\bar{j}}|\mathcal{D},\theta)$). These results theoretically demonstrate that the use of pre-training can indeed improve continual learning compared to training from scratch. We will make it clearer. **Q4: The performance of the proposed method when the feature extractor is pre-trained using only the classes that are dissimilar to CL classes as [2].** A4: Following the setup in [2], we use a pre-trained checkpoint of the ImageNet subset (with 389 similar classes removed) for continual learning of Split CIFAR-100. The final average accuracy (FAA) of S-Prompt++, CODA-Prompt and our approach is 69.00%, 65.07% and 88.05%, respectively. As can be seen, the performance lead of our approach becomes significantly larger in this experimental setup, thanks to the explicit optimization of hierarchical components to overcome sub-optimal aspects in prompt-based continual learning. **Q5: Is the theoretical analysis only relevant to the prompt-based method?** A5: Indeed, our theoretical analysis could be extended as a general framework of parameter-efficient fine-tuning (PEFT) for continual learning with pre-training. Specifically, based on our theoretical analysis, the objective of continual learning is achieved by three components: (1) optimization of WTP with task-specific parameters, (2) optimization of TII with uninstructed representations, and (3) optimization of TAP with instructed representations. Mainstream PEFT techniques are applicable to this framework in general. The differences lie only in the form of task-specific parameters used in (1), which could be prompt, adapter, LoRA, FiLM, etc. 
In fact, we have attempted to implement LoRA in our method, which can still achieve considerable performance (e.g., 87.86%/92.24% on Split CIFAR-100 under Sup-21K/iBOT-21K pre-training). Since the major focus of this paper is to improve prompt-based continual learning, which is one of the most active technical routes in this direction, we leave the extension to other PEFT techniques as well as an empirical comparison for future work. We will add it in the final version. --- Rebuttal Comment 1.1: Title: Look forward to further feedback Comment: We thank you again for the valuable and constructive comments. We hope you may find our response satisfactory and raise your rating accordingly. We are looking forward to hearing from you about any further feedback. Best, Authors
Summary: The authors provide strong empirical analysis on existing "prompting for continual learning" papers. They propose a new hierarchical prompting method that includes several components which take advantage of unstructured data representations. The authors not only propose an interesting approach with SOTA performance, but they have very detailed experiments using several pre-training backbones and datasets. Strengths: In general, I feel that for this subject area (which I am sure will have many neurips submissions), this paper is overall high quality. 1) I appreciate another "prompting for continual learning" paper that starts with insights and findings into existing methods, laying an intuitive foundation for the proposed approach. 2) With good intuitive and theoretical backing that seems reasonable, the method has great performance gains over SOTA. 3) Nice, detailed analysis and ablation study. I checked carefully to see if maybe there was a "hidden" component that does most of the heavy lifting of the method, but the contribution components seem to all be impactful in some way. 4) I am incredibly thankful to see the experiments conducted on several different backbones. This code and framework will be a fantastic contribution on its own, along with the analysis, before even considering the method. As someone who has experience in this problem setting, I think open-source code for this project will be greatly appreciated. Weaknesses: 1) Task-id prediction for class-incremental learning on random splits of classes (i.e., where there is no actual task structure) seems a red flag to me that the pre-trained backbone is too strong for the proposed task datasets, and that it is actually taking advantage of the unfair backbone in order to "circumvent" class-incremental learning, instead posing it as the much easier task-incremental learning problem. 
However, this weakness also exists in the competing methods, and the strengths of the paper outweigh this weakness imo. 2) Figure 2c and 2d are very confusing. Which method is this for? If you are making strong claims about task specificity, should it not be done for all methods? 3) This is not relevant to your findings, and I am not requesting any experiments, but CODA-Prompt might perform better at a lower constant learning rate if the high, constant learning rate is hurting its performance. Learning rate decay is also not a "trick", either - it is a valid hyperparameter choice and important for many representation learning methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: a) See weakness 2 b) What is the computation cost of your methods compared to the baselines? I feel that even with a strong training time cost, I am happy to vote for paper acceptance. Just want to ensure transparency. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Discussed reasonable limitations - there is no obvious potential negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and insightful comments. Below, we provide a point-to-point response to these comments and summarize the corresponding revisions in final version. We are pleased that our implementation code is appreciated. It will be published after acceptance. **Q1: Task-id prediction for class-incremental learning on random splits of classes...it is actually taking advantage of the unfair backbone in order to "circumvent" class-incremental learning, instead posing it as the much easier task-incremental learning problem.** A1: We agree that the pre-trained backbones considered in this work as well as most prompt-based continual learning papers are adequately strong for downstream tasks. In fact, this experimental setup is **complementary** to the commonly-used protocol of class-incremental learning (CIL), which learns incremental classes from scratch or relatively weak pre-training (i.e., learning half of the classes in the first stage). As large-scale pre-training has proven to greatly facilitate a variety of downstream tasks, how to exploit it effectively in continual learning is of increasing interest. Our work demonstrates that CIL has distinctive results with relatively strong pre-training (e.g., more tolerance to the errors in task-identity inference), which extends previous explorations of CIL. On the other hand, the CIL in our experiments remains clearly **different** from TIL. As shown in the response to Reviewer UkQY's Q3, the performance of TIL is remarkably better than that of CIL. This is because the access to task identity enables TIL to avoid inter-task interference via multi-head output layers, while CIL usually requires a well-adapted single-head output layer to avoid errors from predicting an incorrect task identity. 
In fact, the relationship between CIL, TIL and pre-training has been shown in our theoretical analysis, i.e., a cross-talk between within-task prediction (WTP), task-identity inference (TII) and task-adaptive prediction (TAP). Removing the significant impact of pre-training would degenerate our framework into regular CIL, i.e., only WTP and TII remain (see line 231-234). As for TIL, the task identity is provided for multi-head evaluation and thus only WTP remains (see Appendix A.3 and Reviewer UkQY's A3). Besides, we have evaluated our approach on 5-Dataset, a benchmark with **actual task structure** (see line 287-289), and it can still achieve a significant performance lead (see Appendix Table 6 and line 602-624). We will add the above discussion in the final version. **Q2: Figure 2 c and d is very confusing. Which method is this for? If you are making strong claims about task specificity, should it not be done for all methods?** A2: In Fig. 2c, we analyze the instructed representations of task-specific prompts, which are identical to the prompt architecture of S-Prompt++ (see line 181-182, 164-168). In Fig. 2d, we analyze the ability of uninstructed representations to predict task identity, corresponding to the same method in Fig. 2c. For all the prompt-based methods considered in Fig. 1, since the training set of each task is provided sequentially, the optimizable parameters for each task inevitably acquire task-specific knowledge and the corresponding knowledge needs to be invoked correctly at test time. However, due to the complexity of their prompt architectures (e.g., L2P, DualPrompt and CODA-Prompt all reuse some prompt parameters optimized for previous tasks), task specificity is difficult to demonstrate explicitly in experiments as it is with S-Prompt++. Therefore, we construct such a demo experiment to analyze task specificity in prompt-based continual learning. 
In fact, the objective of all methods is tantamount to optimizing the probability distribution on the left side of Eq.(5), which can be decomposed into the two probabilities on the right side, corresponding to Fig. 2c and 2d, respectively. Therefore, our demo experiment is representative of prompt-based methods from a theoretical perspective. We will add more descriptions and explanations to make it clearer. **Q3: Coda-prompt might perform better at a lower constant learning rate if the high constant learning rate is hurting its performance. Learning rate decay is also not a "trick", either - it is a valid hyperparameter choice and important for many representation learning methods.** A3: The main results of CODA-Prompt (i.e., Table 1) are produced **using a lower learning rate with cosine decay** (see line 300), which is the same as its original paper and indeed performs better than a higher constant learning rate (see Fig. 2a, 2b and Appendix Table 5). We have also performed experiments to validate that using such a lower learning rate with cosine decay is slightly better than or comparable to using a lower constant learning rate (see Appendix Table 5). For example, the final average accuracy (FAA) of CODA-Prompt with a learning rate of 0.005 (constant), 0.001 (constant) and 0.001 (cosine decay) on Split CIFAR-100 under Sup-21K pre-training is 85.92%, 86.78%, and 86.94%, respectively. We agree that learning rate decay is a valid hyperparameter choice for deep learning methods. We will modify our claim appropriately and add more explanations to avoid potential misunderstanding. **Q4: What is the computation cost of your methods compared to the baselines?** A4: Using the same single-card A100 GPU, the training times of L2P, DualPrompt, S-Prompt++, CODA-Prompt and our approach on Split CIFAR-100 are 0.55h, 2.00h, 2.01h, 2.08h, and 2.80h, respectively. We observe that L2P requires a much smaller number of epochs for convergence but performs the worst in general. 
Compared to other strong baselines, the computation cost of our approach is of the same order of magnitude. In practice, we include parallel training in our implementation code, which can largely reduce the training time. We will add the above results in the final version. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I would like to thank the authors for answering my concerns and questions. I will remain at a score of 7 and recommend this paper be accepted to NeurIPS 2023. Congratulations on the great work. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you so much for your support. We appreciate it. Best, Authors
Rebuttal 1: Rebuttal: We thank all reviewers for their great efforts and constructive comments, which help us to further improve the manuscript. We have tried our best to address these comments with additional experiments, explanations and discussions. Please let us know if you have any further questions. Pdf: /pdf/77581a1e905fd49c516dcd9dea83c17b59508673.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper suggests a new prompt-based continual learning method by leveraging combinatorial objectives with *within-task prediction*, *task-identity inference*, and *task-adaptive prediction*. The paper first summarizes recent prompt-based continual learning techniques and demonstrates that their performance is unstable depending on the pre-trained backbone (in particular, initializing with self-supervised representations degrades their performance significantly). It then proposes a new regularization technique containing core components including ensemble prompting, contrastive regularization, and a TAP loss. Strengths: The paper provides a comprehensive analysis and theoretical proof of the motivation. The suggested idea is reasonable, and the methodological design is also clear. In experiments, the proposed method consistently surpasses strong baselines in terms of conventionally used continual learning metrics under multiple pre-trained backbones, and its improvement and stability are somewhat impressive. Further, the authors provide the necessary ablation study and analysis in the main paper and appendix. Weaknesses: The effect of the prompt ensemble in the method is not discussed. The different behavior and impact of un-/instructed representations for continual learning is partially discussed in Figure 5, but I recommend a more comprehensive and detailed discussion/analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weaknesses. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and insightful comments. Below, we provide a point-by-point response to these comments and summarize the corresponding revisions in the final version. **Q1: The effect of the prompt ensemble in the method is not discussed.** A1: We would respectfully point out that the effect of the prompt ensemble has been discussed in experiments, referred to as "WTP" in Table 2 and Fig. 4a. As shown in line 248-255, the prompt ensemble is proposed to learn each task more effectively with the architecture of task-specific prompts, so as to improve within-task prediction (WTP). In comparison to the "naive architecture" that employs only task-specific prompts, our prompt ensemble strategy can largely improve the performance of learning each new task (Fig. 4a) and thus improve the average performance over all tasks (Table 2), consistent with the motivation of our design. We will add more explanations to make it clearer. **Q2: The different behavior and impact of un-/instructed representation for continual learning are partially discussed in Figure 5, but I recommend a more comprehensive and detailed discussion/analysis.** A2: Thank you for your valuable suggestion. We have performed an extensive visualization of un-/instructed representations in terms of continual learning methods and pre-training paradigms. Fig. 5 reports the representations of our approach under iBOT-21K pre-training, and the results of Sup-21K are further included in the rebuttal PDF. From these results, we provide a more comprehensive analysis as below: (1) The uninstructed representations show single-peaked patterns in general, thanks to the use of adequate pre-training. This property allows them to be approximated as Gaussian distributions for preservation and recovery, and allows for correct prediction of task identity from them as shown in Fig. 4b. 
(2) The instructed representations tend to be more compact and distinguishable, validating the effectiveness of prompt-based continual learning. The degree of compactness and differentiation varies across continual learning methods and pre-training paradigms, contributing to their performance differences in Table 1. (3) The differences in compactness and differentiation between un-/instructed representations suggest that the design of coarse-grained classification by task and fine-grained classification by class is reasonable for prompt-based continual learning, in line with our theoretical analysis and the proposed method. We will add the above discussion with more visualization results in the final version. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thank you so much! After sincerely reading the authors' rebuttal, all concerns are solved! I'm happy to keep my initial score. Congratulations on publishing strong work in the continual learning field! --- Reply to Comment 1.1.1: Title: Thank you Comment: We thank the reviewer for finding our response satisfactory and are happy to know the very positive rating. Best, Authors
Large Language Models for Automated Data Science: Introducing CAAFE for Context-Aware Automated Feature Engineering
Accept (poster)
Summary: The paper proposes CAAFE, a context-aware automated feature engineering approach to support AutoML by utilizing Large Language Models (LLMs). The proposed method generates new features or transforms the feature space of tabular data by using LLMs and incorporates them to train new models. Transforming the feature space can be an iterative process in which, at each step, the LLM generates code that is run on the training and evaluation sets, followed by fitting a new model and evaluating its performance. The LLM receives a set of instructions through a prompt which includes semantic information about the dataset, feature names, their data types, useful statistics about the features, and a few sample values. The paper reports experimental results on two sets of datasets: 1- OpenML datasets published before Sept 2021 (the GPT-3.5 and GPT-4 training cutoff); 2- Kaggle datasets published after Sept 2021. Results show that CAAFE can improve the performance of a state-of-the-art classifier. They also show that CAAFE can help build models that outperform models built with the help of other state-of-the-art automatic feature generation methods. Strengths: Although the seminal idea proposed in the paper is rather straightforward, its novelty makes it a worthwhile body of research for consideration. - originality: The work is novel and original. It introduces the use of LLMs for AutoML through adding contextual information, which is less explored. - quality: The paper is technically sound and experimental results support the claims. Limitations and risks of the work are also clearly called out. - clarity: The paper is easy to read and follow and the contributions are discussed clearly. - significance: Experimental results show LLMs' usefulness for the feature engineering aspect of AutoML, which testifies to the significance of the work. For more detailed comments on the submission's significance refer to the limitation section. 
Weaknesses: - Alternative feature generation methods that are considered in the experiments are limited. AutoFE is the main alternative method. Other techniques such as PCA or deep sparse feature learning (the authors have mentioned that with small datasets they don't expect much benefit from deep learning methods, but it would be interesting to empirically validate this hypothesis for the studied datasets) are not considered. - The human-in-the-loop aspect of CAAFE is not fully discussed. For example, one of the implications of having a human in the loop is correcting the potential mistakes that the LLM may make. The paper does not discuss how many times the code generated by the LLM needed to be revised by a human, not necessarily due to run-time errors but because of semantic issues in the generated features. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How do you make sure the generated feature makes sense? e.g. https://github.com/cafeautomatedfeatures/CAFE/blob/main/data/generated_code/airlines_v3_0_code.txt why does a Euclidean distance between airports even make sense? Based on the dataset description it is not clear if the from/to airport attributes represent distance to a canonical point. - Minor, line 195, please fix the question marks in "Table ??". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The main limitation of the work is stated by the authors as "We focus on small datasets with up to 2,000 samples in total, because feature engineering is most important and significant for smaller datasets". As stated, the proposed approach is mainly beneficial for smaller datasets, which makes the method less useful for a lot of real-world and especially industrial applications. 
Along the same lines, as clearly mentioned by the author(s), another limitation of the work is the possibility of exceeding the maximum prompt length. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the time and effort taken by you to evaluate our paper. We've carefully considered each point raised and aim to address them comprehensively below. For this year's NeurIPS revisions, it is not possible to upload a modified paper, but only a one-page rebuttal PDF. Hence, we've outlined the changes we'll implement in the final paper within our responses. [Weaknesses] *[1] Alternative feature generation methods that are considered in the experiments are limited. AutoFE is the main alternative method. Other techniques such as PCA or deep sparse feature learning (authors have mentioned that with small data sets they don't expect much benefit from deep learning methods but it would be interesting to empirically validate this hypothesis for the studied data sets) is not considered.* We evaluated more baselines: (1) AutoFE methods: FETCH and OpenFE; (2) AutoML methods: Autosklearn and Autogluon (which perform feature engineering as part of their pipeline; here, the effect of additional AutoFE should be smaller and could be negative). Autosklearn, e.g., also includes PCA among other feature engineering steps. Please see the first point in our general rebuttal, as well as Figure 2 in the rebuttal PDF; detailed results can be found in Table 2 of the one-page rebuttal PDF. TLDR: CAAFE (GPT-4) + TabPFN is strongest among all methods and adds performance to AutoML methods. Using TabPFN as a classifier, a critical difference diagram shows statistical significance of CAAFE over all baselines. For FETCH, we could only evaluate it optimized for Logistic Regression due to the large computational cost (up to 24h / dataset / seed). *[2] The human-in-the-loop aspect of CAAFE is not fully discussed. For example, one of the implications of having human in the loop is correcting the potential mistakes that LLM may make. 
In the paper it is not discussed how many times the code generated by the LLM needed to be revised by a human, not necessarily due to run-time errors but because of semantic issues in the generated features.* We mention that human-in-the-loop AutoML could be possible since semantic explanations can be more interpretable. An empirical evaluation of a human-in-the-loop concept would require running experiments with a cohort of users interacting with our algorithm. With our resources this is not feasible in terms of cost and time. This could be a follow-up to our work, likely coming from industry, if commercialization is a goal. [Questions] *[1] How do you make sure the generated feature makes sense? e.g. https://github.com/cafeautomatedfeatures/CAFE/blob/main/data/generated_code/airlines_v3_0_code.txt why Euclidean distance between airports even make sense? Based on the data set description it is not clear if from or to airport attributes represent distance to a canonical point.* This file has been generated by GPT-3.5, which produces less meaningful, coherent and empirically useful features (all files with "v3" in the name are generated by GPT-3.5, "v4" is GPT-4 - which we did not mention!). Evaluating whether generated features are semantically meaningful is hard, since such an evaluation itself would be subjective. Once again, it would require a cohort of raters, judging subjectively. We do report examples in our paper and in the supplements and address in our limitations that there is no guarantee for features to be meaningful - however, looking at a few examples you quickly see that many are - while not perfect, this is a significant leap forward over classical feature engineering. We added the following to our limitations: "LLMs, at times, exhibit a phenomenon known as "hallucinations", where models produce inaccurate or invented information. 
Within CAAFE, this might result in the generation of features and associated explanations that appear significant and are logically presented, even though they may not be grounded in reality. Such behavior can be problematic, especially when individuals place trust in these systems for essential decision-making or research tasks." *[2] Minor, line 195, please fix the question marks in "Table ??".* Done, thank you! :) --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their responses to my questions/comments. Great to see the experimental results from considering more baselines.
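As a purely illustrative sketch (not the authors' implementation), the accept-if-better loop discussed in this thread can be mimicked as follows; `llm_generate_feature_code` is a hypothetical stub standing in for the GPT call, and `evaluate` is a toy scorer standing in for fitting TabPFN and measuring AUROC:

```python
def llm_generate_feature_code(prompt):
    # Hypothetical stub: a real implementation would query GPT-3.5/4 with the
    # dataset description, feature names, dtypes and sample values.
    return "df['ratio'] = [a / b for a, b in zip(df['x1'], df['x2'])]"

def evaluate(df, y):
    # Toy stand-in for fitting a classifier and measuring AUROC: accuracy of
    # thresholding the most recently added feature at its mean.
    col = list(df.values())[-1]
    mean = sum(col) / len(col)
    preds = [1 if v > mean else 0 for v in col]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def caafe_loop(df, y, rounds=3):
    best = evaluate(df, y)
    for _ in range(rounds):
        code = llm_generate_feature_code("<dataset description>")
        trial = {k: list(v) for k, v in df.items()}  # work on a copy
        exec(code, {"df": trial})                    # run the generated code
        score = evaluate(trial, y)
        if score > best:                             # keep the feature only if it helps
            df, best = trial, score
    return df, best
```

For example, on a toy dataset where the label depends on the ratio `x1 / x2`, the stubbed feature is accepted because it raises the validation score; a rejected feature would simply be discarded and the original frame kept.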
Summary: This paper presents Context-Aware Automated Feature Engineering (CAAFE) for integrating domain knowledge into the AutoML process using LLMs. CAAFE automates feature engineering for tabular datasets, generating Python code that produces semantically meaningful features based on the dataset and a textual description of the data. The study demonstrates the effectiveness of CAAFE on a range of benchmark tasks. Strengths: - Well written and easy to follow. Thank you! - Simple and very applicable to real-world scenarios! Weaknesses: - If set up correctly, the method really can't fail. Features are only added if they improve the objective. Thus the scientific value is somewhat limited to showing that LLMs can create features that improve performance (expected) and the extent to which they do (here is where I see most value). Therefore comparison to other feature engineering methods seems really important and could be more extensive. - Fig 2 is not clear. Is this a flow from left to right? How does the bottom part relate to the boxes? Please explain better. Suggestions: - I don't think the phrase "code as interface" is self-explanatory. Maybe it can be introduced explicitly. - In section 2.1., the sentence "GPT-4 is a deep neural network that uses a transformer architecture" may not actually be correct. It is rumoured that GPT-4 consists of multiple models and uses some mixture-of-experts methodology. In any case, this information is not published, which might be worthwhile pointing out. - l195, table reference is missing Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - What about the datasets where performance does not improve or even weakens? Is there an explanation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: - The main limitation is the lack of a more extensive study of competing approaches. In essence, the authors' work doesn't really answer any open scientific questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the time and effort taken by you to evaluate our paper. We've carefully considered each point raised and aim to address them comprehensively below. Thank you for appreciating the paper presentation! We would like to address your questions and our changes in response to them in detail below. We would also appreciate you reading through the general reviewer feedback, which outlines the changes we made and might address open questions that go beyond what you have asked for. For this year's NeurIPS revisions, it is not possible to upload a modified paper, but only to upload a one-page rebuttal with additional figures. When changes to the text of the main paper were made, we relayed these changes to each reviewer individually. We were a bit surprised at the low contribution score - we believe this paradigm of introducing context / semantics into automated machine learning can be quite meaningful and it is highly original (Reviewer uCcq: "Although the seminal idea proposed in the paper is rather straightforward, it's novelty makes it a worthwhile body of research for consideration.", Reviewer Y55o: "If I'm not mistaken, the first positive showing of LLMs in tabular data", Reviewer ndjC: "To the best of my knowledge, this is the first work to employ LLMs (GPT) for automating feature engineering.") [Weaknesses] *[1] "If set up correctly, the method really can't fail. Features are only added, if they improve the objective. Thus the scientific value is somewhat limited to showing that LLMs can create features that improve performance (expected) and the extent to which (here is where I see most value)."* A featurization is only validated on the training data, but performance is measured on separate test data. Overfitting can occur in the same way as it does in training a model - a featurization can be viewed as just another step in the model construction. 
When the number of provided samples is small or the number of tested featurizations is large, the risk of overfitting becomes larger. By considering semantically meaningful features, CAAFE encodes a prior for features that are more likely to generalize to the test set. Also, it reduces computational complexity by considering useful features more quickly. Looking at the generated code, these featurizations can combine up to 9 features at the same time - consider how unlikely such a combination is to be found by random search. *[2] "Therefore comparison to other feature engineering methods seems really important and could be more extensive."* We evaluated more baselines: (1) AutoFE methods: FETCH and OpenFE; (2) AutoML methods: Autosklearn and Autogluon (which perform feature engineering as part of their pipeline; here, the effect of additional AutoFE should be smaller and could be negative). You can find detailed results in Table 2 of the one-page rebuttal PDF. TLDR: CAAFE (GPT-4) + TabPFN is strongest among all methods and adds performance to AutoML methods. Using TabPFN as a classifier, a critical difference diagram shows statistical significance of CAAFE over all baselines. For FETCH, we could only evaluate it optimized for Logistic Regression due to the large computational cost (up to 24h / dataset / seed). *[3] Fig 2 is not clear. Is this a flow from left to right? How does the bottom part relate to the boxes? Please explain better.* Thank you for the feedback! We removed the arrows and adapted the caption: "Figure 2: Data Science pipeline, inspired by De Bie et al. (2022). CAAFE allows for automation of semantic data engineering, while LLMs could provide even further automation: (1) Context specification is user driven (2) exploitation and data engineering can be automated through LLMs (3) model building can be automated by classical AutoML approaches". We hope this makes the diagram clearer. 
[Suggestions] *[1] I don't think the phrase "code as interface" is self-explanatory. Maybe can be introduced explicitly* We did so in our updated manuscript, thank you! 🙂 "We bridge the gap between LLMs and classical algorithms by using code as an interface between them: LLMs generate code that modifies input datasets, these modified datasets can then be processed by classical algorithms." *[2] In section 2.1., the sentence "GPT-4 is a deep neural network that uses a transformer architecture" - may not actually be correct. It is rumoured that GPT4 consists of multiple models and uses some mixture of experts methodology. In any case, this information is not published which might be worthwhile pointing out.* We did so in our updated manuscript, we write: "The architecture of GPT-4 is not published, it is likely based on a deep neural network that uses a transformer architecture [...]" *[3] l195, table reference is missing* Thank you, done [Questions] *[1] What about the dataset where performance does not improve or even weaken? Is there an explanation.* We believe an answer to this is contained in our reply to your first point of weaknesses. [Limitations] *[2] The main limitation is more extensive study of competing approaches.* We believe an answer to this is contained in our reply to your second point of weaknesses. --- Rebuttal Comment 1.1: Title: Thank you... Comment: Thank you for addressing reviewer comments in depths. Given the additional evaluation, I will adjust my overall rating to 7. --- Reply to Comment 1.1.1: Title: Thank you for your feedback Comment: Thank you for your feedback and intention to update your score. We truly appreciate the time and effort spent in reviewing our submission. You can update the score by clicking on "Edit Review" on your original review - we would very much appreciate that.
Summary: In this paper, the authors propose a feature engineering method that builds upon LLMs. Features are engineered in an iterative process of prompting the LLM to generate code for new features, evaluating the features with an ML model and generating a new prompt. Hence, the overall approach follows the common wrapper approach. In their empirical evaluation they find feature engineering via LLMs to outperform no feature engineering. Strengths: - To the best of my knowledge, this is the first work to employ LLMs (GPT) for automating feature engineering. - The general idea seems to work out quite nicely, comparing competitively to other feature engineering methods and favorably to engineering no features. - The paper is well written and easy to follow. The overall presentation is really excellent - maybe due to the use of ChatGPT for the editing? Weaknesses: Although the idea in general is quite interesting, I have several doubts regarding this work. The first and foremost doubt is about how to place this work within the already existing literature and whether it really establishes a new state of the art. In particular, I am thinking about the following two works which have been missed also during the discussion of related works: Li, Liyao, et al. "Learning a Data-Driven Policy Network for Pre-Training Automated Feature Engineering." The Eleventh International Conference on Learning Representations. 2022. Zhang, Tianping, et al. "OpenFE: Automated Feature Generation beyond Expert-level Performance." arXiv preprint arXiv:2211.12507 (2022). (https://openreview.net/forum?id=1H1irbEaGV) This might be due to these works being rather recent. However, preprints of these papers have already been available for some time. I believe a comparison to these methods is inevitable to prove the real benefit of LLMs and whether the semantic information of feature names is really needed to come up with better features. 
Another issue with the paper is that it claims that LLMs are taking advantage of the semantic information of a dataset, e.g., the names of the features etc. But this is not substantiated in the experiments nor visible in the explanations generated in the examples. Moreover, I am a little bit doubtful to what extent the explanations of an LLM can be meaningful when LLMs base the generated text on the likelihood of the next word. In particular, logical argumentation does not seem to be a strength of GPT so far, and the flaws in its logical reasoning can easily be made visible, e.g., by testing with a sentence like "The [doctor/nurse] yelled at the [doctor/nurse] because she was late. Who was late?" and asking for an explanation for the decision. Why should the explanations be more meaningful for datasets? Speaking about the semantics that are supposedly extracted from feature names, it would have been interesting to see whether this information is indeed used, e.g., by blinding feature names. Neither in the main paper nor in the supplement could I find experiments that aim at answering these questions. However, the absence of these experiments leaves a claim in the paper unproved. Minor: - Reference to table in line 195 broken. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: 1. How does CAAFE compare to the SOTA in automated feature engineering? 2. What makes CAAFE semi-automated? As far as I understood, CAAFE only requires an input by the user once in the beginning to give a description of the dataset. If this is the reason for semi-automation, aren't all AutoML tools out there semi-automated? 3. Do LLMs in CAAFE really leverage semantic information about the dataset? 4. As the authors already state in their Broader Impact Statement, there is a certain risk that biases contained in LLMs transfer to the engineered features and that CAAFE then builds features based on these biases. 
Should not every feature engineering approach based on LLMs directly incorporate mechanisms to prevent leveraging such biases? Especially since such biases might be very subtle. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: The authors provide a broader impact statement estimating what effect and potential negative societal impact there might be. Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the time and effort taken by you to evaluate our paper. We've carefully considered each point raised and aim to address them comprehensively below. Thank you for appreciating the paper presentation - to answer your question about GPT-4 use: We did indeed use GPT-4 to prepare the work, which has been especially useful in creating and refining Latex plots! We mention this in the Acknowledgements section. For this year's NeurIPS revisions, it is not possible to upload a modified paper, but only a one-page rebuttal PDF. Hence, we've outlined the changes we'll implement in the final paper within our responses. ### Weaknesses 1. *You ask for further comparisons to FETCH and OpenFE.* We evaluated more baselines: (1) We additionally compared to the AutoFE methods FETCH and OpenFE, and (2) we additionally used the AutoML methods Autosklearn and Autogluon as base classifiers, which perform feature engineering as part of their pipeline. You can find detailed results in Table 2 of the one-page rebuttal PDF. TLDR: We find that one of the two CAAFE variants is the best feature engineering method for all base classifiers in both mean and mean-rank AUROC, except for logistic regression. Additionally, we find that CAAFE (GPT-4) + TabPFN is the strongest among all methods. Using TabPFN as a classifier, a critical difference diagram shows a statistically significant edge of CAAFE over all baselines. For FETCH, we could only evaluate it optimized for Logistic Regression due to the large computational cost (up to 24h / dataset / seed). 2. *"Another issue with the paper is that it claims that LLMs are taking advantage of the semantic information of a dataset, e.g., the names of the features etc. But this is not substantiated in the experiments"* We performed an additional ablation study to test this hypothesis. We blind feature names and dataset descriptions, i.e. CAAFE is applied to the datasets without contextual information. 
We find a strong drop in performance from an average AUROC of 0.822 to 0.8 over all datasets for GPT-4. Please see Table 1 in the rebuttal PDF for detailed results. Looking at the generated features very strongly highlights the usefulness of semantic information. Consider e.g. the tic-tac-toe dataset: the generated features are based on up to 9 features - a useful combination of that many features is very hard to find with random search, but given contextual information such a feature is quickly discovered. 3. *"Moreover, I am a little bit doubtful to what extent the explanations of an LLM can be meaningful when LLMs base the text generated on the likelihood of the next word. [..]"* Generated explanations do not need to be complex; in many cases this can be a simple explanation of what the generated code does. This can already be quite useful to non-technical users. We published all generated features for you to inspect at https://github.com/cafeautomatedfeatures/CAFE/tree/main/data/generated_code. They show that generated feature explanations are often of high quality and include semantic information. The quality of the provided explanations often surprised us as well. We have in the meantime received a significant amount of feedback highlighting that this interpretability is going to be especially valuable to practitioners. Take the following generation for balance_scale as an example: https://github.com/cafeautomatedfeatures/CAFE/blob/main/data/generated_code/balance-scale_v4_0_code.txt Here the task is predicting if a scale tips. We do, however, want to address the issue of false explanations, and added the following to our limitations: LLMs, at times, exhibit a phenomenon known as "hallucinations", where models produce inaccurate or invented information. 
Such behavior can be problematic, especially when individuals place trust in these systems for essential decision-making or research tasks. ### Questions 1. *How does CAAFE compare to the SOTA in automated feature engineering?* We believe an answer is contained in our reply to your first point of weaknesses. 2. *What makes CAAFE semi-automated? As far as I understood, CAAFE only requires an input by the user once in the beginning to give a description of the dataset. If this is the reason for semi-automation are not every AutoML tools out there semi-automated?* Our intention was the following: Classical AutoML deals with model selection and technical optimizations which do not require semantic information. The term AutoDS or Automated Data Science often refers to a more extensive automation of the Data Science stack, where AutoML just covers a portion. We chose the term semi-automated Data Science since we are not automating all of the data science stack, but another part of it, while highlighting that this is a step towards AutoDS, i.e. full automation of the DS pipeline. The naming might be confusing, however, and we are considering a slight change. 3. *Do LLMs in CAAFE really leverage semantic information about the dataset?* See the reply to Weakness 2. 4. *As the authors already state in their Broader Impact Statement there is a certain risk that biases contained in LLMs transfer to the engineered features and that CAAFE then builds features based on these biases. Should not every feature engineering approach based on LLMs directly incorporate mechanisms to prevent to leverage such biases? Especially since such biases might be very subtle.* We have formulated a detailed answer in our reply to the ethics reviewer Ce9G and refer you there for an in-depth discussion of biases as we only have limited space in each rebuttal post. 
--- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: The elaborate rebuttal is very much appreciated and my concerns regarding CAAFE were lowered by the authors' response and additional experiments. However, I, unfortunately, cannot see the ethics reviewer's comments and thus cannot judge to what extent these issues have been resolved. Yet, it is relatively hard to judge based on the aggregated means without any standard deviation whether the drop in performance is indeed substantial and outside the regime of the standard error. Furthermore, for GPT-3.5 it seems as if it cannot really leverage the semantic information of the feature descriptions at all? How can this be explained? --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply! Unfortunately the ethics reviewer has not replied so far - we have provided a two-part answer with examples and an improved Broader Impact section. *"Yet, it is relatively hard to judge based on the aggregated means without any standard deviation whether the drop in performance is indeed substantial and outside the regime of the standard error."* In our rebuttal PDF, we include Critical Difference Diagrams in Figure 3, which implement the Wilcoxon test with a Bonferroni multiple testing correction. This test corrects for multiple testing across all our baselines as part of the testing procedure. Critical difference (CD) diagrams are a powerful tool to compare outcomes of multiple treatments over multiple observations. For instance, in machine learning research we often compare the performance of multiple methods over multiple data sets (i.e., observations). A diagram like the one above concisely represents multiple hypothesis tests that are conducted over the observed outcomes. Before anything is plotted at all, the **Friedman test** tells us whether there are significant differences at all. 
If this test fails, we do not have sufficient data to tell any of the treatments apart and we must abort. If, however, the test successfully rejects this possibility, we can proceed with the post-hoc analysis. In this second step, a **Wilcoxon signed-rank test** tells us whether each pair of treatments exhibits a significant difference. Since we are testing multiple hypotheses, we must adjust the Wilcoxon test with Bonferroni's method. For each group of treatments which we cannot distinguish from the **Bonferroni-adjusted Wilcoxon test**, we add a thick line to the diagram. *Furthermore, for GPT-3.5 it seems as if it cannot really leverage the semantic information of the feature descriptions at all? How can this be explained?* This highlights how much better GPT-4 is able to integrate semantic information. We see that GPT-3.5 often uses semantic information in a false way and hallucinates. See this example on the airplanes dataset, where it uses Airport IDs to measure distance - here the idea of calculating distances seems suitable, but the details are missed: this information is not contained within the IDs. ``` # Usefulness: Distance between airports can be a useful feature for predicting flight delays. Longer distances may have more potential for delays due to weather, air traffic control, etc. # Input samples: 'AirportFrom': [225.0, 39.0, 5.0], 'AirportTo': [11.0, 7.0, 60.0] df['Distance'] = ((df['AirportFrom'] - df['AirportTo'])**2)**0.5 ```
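The two-step testing procedure described in this reply (Friedman omnibus test, then pairwise Bonferroni-adjusted Wilcoxon signed-rank tests) can be sketched with SciPy. The score matrix below is a random toy example, not the actual results from the rebuttal PDF:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Rows: datasets (observations), columns: methods (treatments).
# Toy AUROC scores for illustration only -- not our actual results.
rng = np.random.default_rng(0)
scores = rng.uniform(0.6, 0.9, size=(14, 3))
scores[:, 2] += 0.05  # make one method stand out

# Step 1: Friedman omnibus test -- are there any differences at all?
stat, p_friedman = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])

if p_friedman < 0.05:
    # Step 2: pairwise Wilcoxon signed-rank tests, Bonferroni-adjusted.
    n_methods = scores.shape[1]
    n_pairs = n_methods * (n_methods - 1) // 2
    alpha = 0.05 / n_pairs  # Bonferroni-corrected significance level
    for i in range(n_methods):
        for j in range(i + 1, n_methods):
            _, p = wilcoxon(scores[:, i], scores[:, j])
            verdict = "distinct" if p < alpha else "tied (thick line in the CD diagram)"
            print(f"method {i} vs {j}: p={p:.4f} -> {verdict}")
```

Method pairs joined by a thick line in the CD diagram are exactly those whose Bonferroni-adjusted Wilcoxon test fails to reject.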
Summary: This paper presents a novel approach to automated feature engineering utilizing large language models (LLMs). Authors propose to use LLMs to generate code for feature generation. In the proposed method CAAFE, the LLM is given a prompt with a dataset description and a task of writing code for feature generation; the new feature is accepted if the validation score improves after training an ML model on a dataset containing the new feature. This procedure is repeated. The paper demonstrates the effectiveness of the approach by testing on 10 public datasets from OpenML and 4 datasets from Kaggle (less likely to leak into the LLM training set) and slightly improving the overall classification accuracy of the strong tabular classification model TabPFN. Strengths: - I find the idea of incorporating LLMs via code generation into tabular-data automatic feature engineering clever and clean - Having interpretable features is a big plus and an advantage over prior automatic methods - If I'm not mistaken, the first positive showing of LLMs in tabular data (where using an LLM seems to improve the performance of an already strong tabular model) - Precautions taken to create a non-leaked test set Weaknesses: The major issue (that's why the score is not as high as it could have been for me) is the lacking evaluation. Both in terms of baselines/complementary methods and datasets. - Authors compare and complement the proposed method with AutoFeat `[1]` and DFS `[2]`, while much more effective automatic feature engineering methods like FETCH `[3]` and OpenFE `[4]` exist. - Also, in the above-mentioned papers, the evaluations are much more extensive, including more datasets (also regression datasets) and base models. Implementing CAAFE in those benchmarks would be a very big plus for the validation of the proposed method. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - What do the bold entries mean in the table? 
The standard deviation values are pretty large, but results are bolded. Were stat tests used? - Does performance saturate after 10 iterations? What is the reason for stopping there? - Do you update the prompt with the generated feature descriptions for new iterations (I think info is missing in the paper, but this seems important)? - Could you report some numerical quantification of features generated by CAAFE, like feature importance ranking? - Minor issues: - line 159: should probably cite TabPFN - line 195: broken reference Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: - I think the fact that the method requires dataset and feature descriptions is a limitation - Looking at the generated code, some features and their usefulness explanations are, almost surely, hallucinations. I think it's important to mention hallucinations in the limitations section to represent this. **References** - `[1]` Franziska Horn, Robert Pack, and Michael Rieger. The autofeat python library for automatic feature engineering and selection. - `[2]` James Max Kanter and Kalyan Veeramachaneni. Deep feature synthesis: Towards automating data science endeavors - `[3]` Li, Liyao, et al. "Learning a Data-Driven Policy Network for Pre-Training Automated Feature Engineering." - `[4]` Zhang, Tianping, et al. "OpenFE: Automated Feature Generation beyond Expert-level Performance." Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the time and effort taken by you to evaluate our paper. We've carefully considered each point raised and aim to address them comprehensively below. We are especially happy that you appreciate the interpretability of our generated features. We have in the meantime received a lot of feedback that this point is going to be especially valuable to practitioners. We would also appreciate you reading through the general reviewer feedback, which outlines the changes we made and might address open questions that go beyond what you have asked for. For this year's NeurIPS revisions, it is not possible to upload a modified paper, but only a one-page rebuttal PDF. Hence, we've outlined the changes we'll implement in the final paper within our responses. [Weaknesses] *[1] "Authors compare and complement the proposed method with AutoFeat [1] and DFS [2], while much more effective automatic feature engineering methods like FETCH [3] and OpenFE [4] exist."* We evaluated more baselines: (1) AutoFE methods: FETCH and OpenFE (2) AutoML methods: Autosklearn and Autogluon (which perform feature engineering as part of their pipeline; here, the effect of additional AutoFE should be smaller and could be negative). You can find detailed results in Table 2 of the one-page rebuttal PDF. TLDR: CAAFE (GPT-4) + TabPFN is strongest among all methods and adds performance to AutoML methods. Using TabPFN as a classifier, a critical difference diagram shows a statistically significant advantage of CAAFE over all baselines. For FETCH, we could only evaluate it optimized for Logistic Regression due to the large computational cost (up to 24h / dataset / seed). *[2] "Also, in the above-mentioned papers, the evaluations are much more extensive, including more datasets (also regression datasets) and base models. 
Implementing CAAFE in those benchmarks would be a very big plus for the validation of the proposed method."* For CAAFE we have a special set of requirements which makes its evaluation more challenging than for previous approaches: (1) CAAFE requires meaningful dataset descriptions, which are not available for all of these previous datasets; (2) as stated in our limitations, the cost of applying CAAFE rises linearly with the number of features in a dataset, thus we focus on datasets with fewer than 20 features. We believe that contextual prediction problems, where a dataset is modelled with a description of its use case, will grow in significance as deep learning becomes more multimodal and language models are applied to more modalities. A larger benchmark of datasets with interesting and varied contextual information will be vital to evaluate these works, especially when comparing multiple contextual algorithms. [Questions] *[1] "What do the bold entries mean in the table? The standard deviation values are pretty large, but results are bolded. Were stat tests used?"* We simply bolded entries with the largest mean values. Standard deviations are across data-splits and can thus be quite large. We added an explanation to the table describing which entries are highlighted. Also, we use a statistical significance test, comparing AutoFE using a critical difference diagram, which employs a Wilcoxon test for statistical significance (taking into account multiple testing for multiple tested methods). See our answer to Weakness [1]. *[2] "Does performance saturate after 10 iterations? What is the reason for stopping there?"* Supplement F shows the cost and performance improvement when running CAAFE for 1-10 iterations. We see that performance is rising even after 10 iterations. 10 was simply the parameter that we started with. We did not, however, evaluate more iterations since we ran out of budget and could not repeat all experiments with another iteration parameter. 
*[3] "Do you update the prompt with the generated feature descriptions for new iterations (I think info is missing in the paper, but this seems important)?"* The prompt in each iteration contains the previously generated code and the performance after executing that feature engineering step. Thus, CAAFE can learn from previous code operations. This important information has, indeed, been missing from our work, and we included it in our new version. In line 141, we add: "F: Any code generated by CAAFE in previous iterations, as well as ROC AUC and accuracy evaluated on the validation splits." *[4] "Could you report some numerical quantification of features generated by CAAFE, like feature importance ranking?"* Yes, surely - we had a plot prepared for this that shows the feature importance of multiple datasets. Unfortunately, due to the page limit (1 page) for figures in this rebuttal, we could not include these plots... We will include this plot in the final work. On average the mean importance of generated features in our trained random forests was 1.49 times that of the original features. *[5] Minor issues: line 159: should probably cite TabPFN, line 195: broken reference* Thank you, we have addressed these issues :) [Limitations] *[1] Looking at the generated code, some features and their usefulness explanations are, almost surely, hallucinations. I think it's important to mention hallucinations in the limitations section to represent this.* We added the following section to our conclusion: LLMs, at times, exhibit a phenomenon known as "hallucinations", where models produce inaccurate or invented information. Within CAAFE, this might result in the generation of features and associated explanations that appear significant and are logically presented, even though they may not be grounded in reality. Such behaviour can be problematic, especially when individuals place trust in these systems for essential decision-making or research tasks.
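The iterative scheme discussed in questions [2] and [3] can be sketched as a simple accept/reject loop. Note that `llm_propose_feature` and `validation_score` below are hypothetical stand-ins, not CAAFE's actual implementation:

```python
def caafe_loop(train, val, llm_propose_feature, validation_score, n_iterations=10):
    """Sketch of an accept/reject feature-engineering loop (not the real CAAFE).

    llm_propose_feature(history) -> (code_str, apply_fn): hypothetical LLM call,
        where history holds previously accepted code and scores (fed back into
        the prompt, as described in the answer to question [3]).
    validation_score(train, val) -> float: e.g. ROC AUC of a base classifier.
    """
    history = []
    best = validation_score(train, val)
    for _ in range(n_iterations):
        code, apply_fn = llm_propose_feature(history)
        cand_train, cand_val = apply_fn(train), apply_fn(val)
        score = validation_score(cand_train, cand_val)
        if score > best:  # keep a feature only if the validation score improves
            train, val, best = cand_train, cand_val, score
            history.append((code, score))
    return train, val, history
```

The loop only ever keeps transformations that improve the validation score, which is why the number of iterations (10 in the paper) mainly trades off cost against further small gains.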
Rebuttal 1: Rebuttal: Thank you very much for your constructive feedback. We believe that the reviews have helped us to improve our work significantly. For this year's NeurIPS review, it is not possible to upload a revised paper, only a one-page rebuttal PDF. Hence, we've detailed the changes we'll be making in the final paper in our responses. ### Further evaluation of baselines We've expanded our evaluation to include i. AutoFE methods: FETCH and OpenFE. ii. AutoML methods: Autosklearn and Autogluon, which perform feature engineering as part of their pipeline. You can find detailed results in Table 2 in the rebuttal PDF; we provide a summary here: 1. We find that among all feature engineering baselines and classifiers, the strongest combination is CAAFE (GPT-4) with TabPFN in terms of mean and mean rank AUROC. In addition, we find that CAAFE is the best feature engineering method for all base classifiers in both mean and mean rank AUROC, except for logistic regression. 2. We construct a critical difference plot that performs statistical tests between the feature engineering baselines. CAAFE (GPT-4) performs statistically significantly better than the baselines when using TabPFN, the strongest base classifier. Critical difference plots perform a Wilcoxon signed rank test with correction for multiple testing. 3. Notably, CAAFE improves the performance of all methods, even AutoML methods with built-in AutoFE. This underscores CAAFE's unique ability to integrate semantic/contextual information missing in traditional feature engineering. 4. CAAFE can be combined with AutoFE baseline methods and AutoML, see Table 2 in our original manuscript. CAAFE combined with an AutoFE baseline is the strongest predictor for simple classifiers (logistic regression and random forest). Here, the strength of CAAFE (context integration) is added to the strength of the baselines (combination of a large number of features). 
For AutoML methods that already perform AutoFE, we do not see this effect, and only the use of CAAFE is strongest. As stated in our work, we do not consider CAAFE to work in the same domain as the baselines. CAAFE performs the novel semantic/contextual feature engineering and obtains additional information, while baselines perform data transformation based solely on the data matrix. ### Integration of Semantic Information 1. We conducted an experiment blinding the semantic information to more explicitly show the influence of semantic information on CAAFE's performance. By excluding feature names and dataset descriptions, we found a significant drop in performance, from an average AUROC of 0.822 to 0.8. Details can be found in Table 1 of our rebuttal PDF. 2. Looking at the generated features in https://github.com/cafeautomatedfeatures/CAFE/tree/main/data/generated_code gives a good intuition about the value of semantic information (files with V4 were generated by GPT-4, V3 by GPT-3.5): Some of these generated features contain up to 9 variables combined in a meaningful way. Generating such combinations without a strong semantic prior of meaningful combinations would be extremely expensive. Also note that a feature is only validated on the training data, but performance is measured on separate test data. Overfitting can occur in the same way as when training a model - a feature extraction can be seen as just another step in model construction. When the number of samples provided is small, or the number of features tested is large, the risk of overfitting becomes greater. By considering semantically meaningful features, CAAFE includes a prior for features that are more likely to generalize to the test set. Pdf: /pdf/381c19f90654a9ba1f53791dd67b3b102e6bbfbe.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposed the Context-Aware Automated Feature Engineering (CAAFE) approach to integrate new features learned from dataset descriptions using large language models into the AutoML process for tabular datasets. The proposed approach was evaluated using 14 datasets. Strengths: The paper is well-written with clear research motivation, experiment design and results illustration. It's an interesting attempt to try to integrate large language models into the AutoML process. Code scripts are provided. Weaknesses: 1. Overall the use case of adding features from data description is trivial, even using large language models or human-in-the-loop concept. The proposed feature engineering process through prompt generation from the data description or any semantic information looks more like heuristic rules for featurization, which could be performed without large language models. LLM plays the role of automated code generation, instead of machine learning feature engineering approaches (e.g. PCA) incorporated in most AutoML systems. 2. The paper fails to catch up with the latest trends of AutoML systems. Most existing AutoML systems will perform automated feature engineering, e.g. H2O AutoML, TPOT, AutoGluon, AutoSklearn, etc. Missing those AutoML baselines significantly eliminates the value of the proposed approach in modeling effectiveness. 3. In Table 1, the increased CAAFE performance is relatively trivial for most datasets, except the tic-tac-toe, which significantly impacts the mean ROC AUC. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Highly recommend the author to run the data experiments using the existing AutoML systems (listed in weakness part, they are all open source) without LLM featurizations from dataset description as baselines. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes. The author addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the time and effort taken by Reviewer QFgW to evaluate our paper. We've carefully considered each point raised and aim to address them comprehensively below. We would also appreciate you reading through the general reviewer feedback, which outlines the changes we made and might address open questions that go beyond what you have asked for. For this year's NeurIPS revisions, it is not possible to upload a modified paper, but only to upload a one-page rebuttal with additional figures. So we posted changes that we will make in the final paper as part of our replies. ### Weaknesses 1. *"Overall the use case of adding features from data description is trivial, even using large language models or human-in-the-loop concept. The proposed feature engineering process through prompt generation from the data description or any semantic information looks more like heuristic rules for featurization, which could be performed without large language models. LLM plays the role of automated code generation, instead of machine learning feature engineering approaches (e.g. PCA) incorporated in most AutoML systems."* We believe that there is strong evidence that the generated features are not trivial and significantly benefit from the LLM's knowledge. Baseline approaches such as DFS, AutoFeat, FETCH and OpenFE perform worse than our approach, and we do not believe that the features these approaches propose are trivial. Also, CAAFE improves even the performance of AutoML methods, such as Autogluon and Autosklearn, which already have built-in feature engineering. Please see Table 2 in the rebuttal PDF for details. TLDR: CAAFE (GPT-4) + TabPFN is the strongest among all methods. Using TabPFN as a classifier, a critical difference diagram shows the statistically significant performance advantage of CAAFE over all baselines. Semantic information, i.e. 
the context of the dataset and its columns, is crucial and can only be captured through laborious human work or our novel approach of using LLMs - this is the core of our approach. To further verify and quantify this claim, we perform an experiment where the context of the dataset is left out (i.e. feature names and dataset description are not given to the LLM). We find a strong drop in performance from an average AUROC of 0.822 to 0.8 over all datasets for GPT-4. Please see Table 1 in the rebuttal PDF for details. Why does semantic information help non-trivially? It reduces computational complexity by considering useful features. We saw that CAAFE generates sophisticated features depending on up to 9 base features. A featurization is only validated on the training data, but performance is measured on separate test data. Overfitting can occur in the same way as it does in training a model - a featurization can be viewed as just another step in the model construction. When the number of provided samples is small or the number of tested featurizations is large, the risk of overfitting becomes larger. By considering semantically meaningful features, CAAFE generates fewer but more valuable features that are thus more likely to generalize to the test set. 2. You state that our work *"fails to catch up with the latest trends of AutoML systems"* since *"most existing AutoML systems will perform automated feature engineering".* We are very closely following the space of AutoML and are definitely aware of AutoFE being part of many AutoML approaches. The premise of our work is that classical AutoML and AutoFE cannot capture and use contextual information, which our work seeks to do. Our original manuscript contains TabPFN, which is one such AutoML method that implicitly engineers features. For this rebuttal, we evaluated further AutoML methods, namely Autosklearn and Autogluon, and use them together with CAAFE. 
Here we see that while baseline feature engineering methods do not improve AutoML methods, as the improvements made by them are already captured inside, CAAFE can improve even state-of-the-art AutoML methods. We explain this by the fact that the improvements gained by CAAFE come from semantic information (as we show above), which is not captured by current AutoML methods. Please see Table 2 in the rebuttal PDF for details. There you can see that CAAFE can improve upon AutoML methods (TabPFN, Autosklearn and Autogluon) in contrast to traditional AutoFE methods. 3. *"In Table 1, the increased CAAFE performance is relatively trivial for most datasets"* The improvement of CAAFE is similar to the improvement achieved by using a random forest instead of logistic regression on our datasets, which is a drastic difference. While mean AUROC is significantly affected by strong performance on tic-tac-toe, the number of wins and the ranks are not affected by outliers. We still see that CAAFE improves TabPFN on 11 out of 14 datasets, more wins than random forest has over logistic regression. It looks similar in terms of rank improvement: [Log. Reg. -> Rand. Forest: Mean rank 5.11 -> 4.44] and [TabPFN -> TabPFN + CAAFE: Mean rank 4.39 -> 3.06]. ### Questions 1. *"Highly recommend the author to run the data experiments using the existing AutoML systems (listed in weakness part, they are all open source) without LLM featurizations from dataset description as baselines."* This question has been answered in part 1 of our "Weaknesses" replies. We hope we could address your concerns and are open to additional insights or suggestions, ensuring our research stands robust in the domain. --- Rebuttal 2: Comment: Dear Reviewer QFgW, I hope this message finds you well. As the discussion period is nearing its conclusion on August 21st, we kindly request your feedback and thoughts on our recent revisions. 
You have given a very low score based on criticisms that we are very confident to have addressed well, providing the experiments that you requested and giving further insights into the workings of CAAFE. Please let us know if there are any additional points you'd like us to cover. Thank you for your time and consideration. --- Rebuttal Comment 2.1: Title: Thank you for more experiments Comment: The additional experiments and evaluations are very appreciated to address my concerns and questions. I have to clarify that I'm not against the idea that semantic information will help in prediction, but am more conservative about the challenges when applying it in real practice, given data noise and other data uncertainties. That's why a thorough evaluation with meaningful comparison and baseline models is needed for any experiments and research like this. However, the additional evaluation still cannot fully resolve my concerns. It will be better to report confidence intervals given that the difference is still trivial in some cases. Also the more experiments you run, the more careful we need to be on the multi-hypothesis testing problem - that the performance improvement is random instead of statistically significant. I adjusted my rating by two grades but unfortunately still cannot accept it confidently. --- Reply to Comment 2.1.1: Comment: Thank you very much for your reply and feedback. We will reply to your two remaining concerns about our evaluation below: *"Also the more experiments you run, the more careful we need to be on the multi-hypothesis testing problem, that the performance improvement is random, instead of statistically significant"* In our rebuttal PDF, we include Critical Difference Diagrams in Figure 3, which implement the Wilcoxon test with a Bonferroni multiple testing correction: Critical difference (CD) diagrams are a powerful tool to compare outcomes of multiple treatments over multiple observations. 
For instance, in machine learning research we often compare the performance of multiple methods over multiple data sets (i.e., observations). A diagram like the one above concisely represents multiple hypothesis tests that are conducted over the observed outcomes. Before anything is plotted at all, the **Friedman test** tells us whether there are significant differences at all. If this test fails, we do not have sufficient data to tell any of the treatments apart and we must abort. If, however, the test successfully rejects this possibility, we can proceed with the post-hoc analysis. In this second step, a **Wilcoxon signed-rank test** tells us whether each pair of treatments exhibits a significant difference. Since we are testing multiple hypotheses, we must adjust the Wilcoxon test with *Bonferroni's method*. For each group of treatments which we cannot distinguish from the **Bonferroni-adjusted Wilcoxon test**, we add a thick line to the diagram. *"It will be better to report confidence intervals given that the difference is still trivial in some cases."* As stated in the previous rebuttal, the improvement of our method is far from trivial, with larger differences than switching from logistic regression to trees for the evaluated datasets. As you state above, multiple testing problems occur, and we believe a statistical test or standard deviations are more appropriate to report than confidence intervals. Would you agree?
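As an aside on the rank statistics referenced in this thread (e.g. [TabPFN -> TabPFN + CAAFE: Mean rank 4.39 -> 3.06]), mean ranks are obtained by ranking the methods within each dataset and averaging across datasets. The score matrix below is a toy illustration, not the paper's results:

```python
import numpy as np
from scipy.stats import rankdata

# Rows: datasets, columns: methods. Toy AUROC values (higher is better).
scores = np.array([
    [0.70, 0.74, 0.78],
    [0.81, 0.80, 0.85],
    [0.66, 0.69, 0.68],
    [0.90, 0.88, 0.93],
])

# Rank methods within each dataset: rank 1 = best (highest AUROC);
# rankdata averages ranks in case of ties.
ranks = rankdata(-scores, axis=1)
mean_ranks = ranks.mean(axis=0)
print(mean_ranks)  # mean rank per method: 2.5, 2.25, 1.25 (method 3 is best)
```

Unlike the mean AUROC, these ranks are robust to a single outlier dataset such as tic-tac-toe, which is why both statistics are reported.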
Brain Diffusion for Visual Exploration: Cortical Discovery using Large Scale Generative Models
Accept (oral)
Summary: This paper proposes a diffusion-based method for fMRI-guided image synthesis and claims that it can identify the selectivity of various ROIs in the visual cortex. The motivation of this work is well-defined and is of vital importance to the computational neuroscience field. However, the method in this work is intuitive and lacks machine-learning depth. Strengths: * Interesting scientific field and question of neuroscience. * The paper is well-written and has a smooth and concrete flow. * The experiments and visualizations are in good condition and sufficient. Weaknesses: * The method used in this paper is heuristic and lacks technical depth. * Generating or reconstructing images from fMRI is not a new idea, as has been proposed in [1], etc. * CLIP is an off-the-shelf large-scale pre-trained model; performing conditional diffusion-based generation with embeddings from CLIP or Stable Diffusion is stereotyped [2,3]. * In Section 3.3, the domain-specific techniques you designed for this specific task are just some first-order tricks (i.e., linear combination of S, Euler approximation). Maybe these tricks are effective enough, but that's overly empirical. * The experimental section is empirical and lacks theoretical analysis. Your analysis of the fMRI encoder with ROIs is interesting, but little focus has been put on the scientific relationship between neuroscience and diffusion models. [1] https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3 [2] https://arxiv.org/abs/2112.10752 [3] https://arxiv.org/pdf/2208.01618.pdf Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please consider the things listed in the “Weaknesses” section. Also please consider providing information regarding the interpretability of the utilization of the diffusion model in your context. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: To my knowledge and understanding, there is no potential negative societal impact of this work. For other limitations, please see “Weaknesses”. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are strongly encouraged by your evaluation that our work is interesting for neuroscience, it is well-written, and it has good experiments. We appreciate your advice on clarification. We address specific comments below and refer to the general response for results. > **Scope of the paper** We would like to clarify that our work proposes to use diffusion models as a component to elucidate the functional role of different regions of the brain. Our work does not investigate the use of diffusion models for image reconstruction from fMRI or image generation from text. We will clarify this in the upcoming revision. We are using diffusion models because they have been demonstrated to be state-of-the-art estimators of the natural image prior [1]. These models have been trained on billions of images (the model we use is trained on LAION-2B), which is three orders of magnitude more data than BigGAN models trained on ImageNet. We further use the fact that diffusion models can be interpreted as a score function (gradient of the log-likelihood), and can be combined with a derivative of an energy function (gradient of the brain w.r.t. input) [2]. > **Q1.1) Comparison to Takagi et al.** The paper by Takagi et al. [3] is relevant and interesting. It is a work we already cite (Line 67) in our own work. However, the problem that we investigate is very different from Takagi et al. Takagi et al. investigates *reconstructing visual stimuli from a fixed set of voxels* in the brain. Our work explores the synthesis of **novel visual stimuli** that are predicted to activate parts of the visual cortex, not reconstruction of seen stimuli. The method proposed in our paper is intended as a tool for future data-driven investigations of the human visual cortex. As our method is gradient based, it is permutation invariant [4] and can be flexibly applied to any subset of visual cortex voxels given a voxel-wise encoding model.
**To our knowledge, our paper is among the first to explore the synthesis of images predicted to activate higher visual cortex with diffusion models.** There is concurrent work [5] which explores using diffusion models to synthesize images predicted to activate macaque V4. Previous work on diffusion models with fMRI tackles reconstruction. > **Q1.2) On text-conditioned image synthesis** To clarify, **we are not performing text-conditioned image synthesis** using CLIP embeddings as done in [6,7]. Our use of CLIP is purely as a backbone for the visual encoder for fMRI. The use of a pretrained backbone with a linear decoding layer is common in voxel-wise encoding models for fMRI neuroscience, and CLIP models are the best backbone for higher visual cortex in fMRI [8]. The optimization method proposed in [7] requires knowledge of ground truth images for a given concept, which we do not have. > **Q1.3) Use of a linear combination of S and Euler estimation** * **On a linear combination of S** * A linear combination of voxel (S) activations is accepted in neuroscience investigations for fMRI. In fMRI, voxels are often clustered in regions of interest (ROIs), where voxels have similar selectivity and behavior to visual stimuli. In ROIs, it is common to evaluate the average activations of all voxels using a linear combination (average of S). **This approach has been used in some of the most important papers for visual fMRI [9, 10].** * **On an Euler estimation** * The Euler estimate is often used when the gradients of a network are used to guide image synthesis. Concurrent work [5], which uses diffusion models to generate images that are predicted to activate macaque V4 neurons, **also uses an Euler estimate to obtain a clean sample that is fed to their encoder (see Eq. 6 in [5])**. This approach is acknowledged in other work ([11], Section 2.3, last paragraph; and [12], Eq. 4).
We adopt this technique since the Euler estimate is in the distribution of natural images, and we only use it to modify the image provided to the encoder, not the diffusion output directly. > **Q2) Theoretical analysis of our work** We have included a theoretical analysis of our work in **section 9 of our supplemental.** In that section, we show that with an encoder backbone of fixed norm, the brain maximization objective for single or multiple voxels yields an image where the CLIP image embedding is equal to the normalized voxel weight. > **Interpretability** We show in **section 4 of the supplemental** how the brain and the diffusion model work together. We find that low-level details often emerge first, and that the brain signal and diffusion signal are not always harmonious. ⠀ We genuinely appreciate the reviewer's commitment to ensuring the rigor of our paper. Their insights have undeniably contributed to its refinement. The clarifications we've provided underscore the innovative aspects of our research and its potential contributions to the field. We respectfully invite the reviewer to approach our work with a more optimistic viewpoint. ⠀ [1] Diffusion Models Beat GANs on Image Synthesis [2] Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC [3] High-resolution image reconstruction with latent diffusion models from human brain activity [4] Deep Set Prediction Networks [5] Energy Guided Diffusion for Generating Neurally Exciting Images [6] High-Resolution Image Synthesis with Latent Diffusion Models [7] An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion [8] What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines?
[9] The fusiform face area: a cortical region specialized for the perception of faces [10] A cortical representation of the local visual environment [11] GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models [12] UPainting: Unified Text-to-Image Diffusion Generation with Cross-modal Guidance --- Rebuttal Comment 1.1: Comment: I thank the authors for a detailed response and for answering my concerns one by one. **Q1.1)**: It's true that the problem you investigate is different from that of Takagi et al. [1]. However, the method and model framework you use is very similar to theirs; both methods fall within the Latent Diffusion Model scope, as you mention in Section 3.1. **Q1.2)**: Thanks for your comment. This part makes sense to me. Your new technique of performing fMRI-conditioned image synthesis is interesting. **Interpretability**: I think the results in Section 4 of the supplementary material are a characteristic of a normal diffusion model, which tends to generate the main features of an image in the first steps and then the details in later steps. I wonder how the gradient guidance of your method contributes to this generation process. In conclusion, while there's room for refinement in the methodology, the paper offers a significant scientific contribution to neuroscience. I have revised my score in light of these observations. [1] High-resolution image reconstruction with latent diffusion models from human brain activity --- Reply to Comment 1.1.1: Comment: We appreciate the positive evaluation you have given to this paper!
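As a concrete illustration of the Euler estimate discussed in the rebuttal above (a one-step prediction of the clean sample from a noisy sample and the model's noise prediction), here is a minimal sketch under the standard DDPM forward process. The function name and scalar inputs are hypothetical; real implementations operate on image tensors:

```python
import math

def euler_clean_estimate(x_t, eps_hat, alpha_bar_t):
    """One-step (Euler/Tweedie) estimate of the clean sample x0, given a noisy
    sample x_t and the model's noise prediction eps_hat, under the DDPM
    forward process x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return (x_t - math.sqrt(1.0 - alpha_bar_t) * eps_hat) / math.sqrt(alpha_bar_t)
```

With a perfect noise prediction the estimate recovers the clean sample exactly; during sampling the prediction is only approximate, which is why the rebuttal stresses that the estimate is used solely to modify the image fed to the brain encoder, not the diffusion output itself.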
Summary: The paper presents a new algorithm to guide a diffusion model to decode maximally activating images for particular voxel subregions of human fMRI, using an encoding model trained to predict brain activity from images. The algorithm identifies stereotypic features for defined ROIs, such as faces or food items. The authors evaluate the specificity of the reconstructed image with CLIP zero-shot classification and human evaluation. They also cluster the encoding model weights and find that the clusters result in perceivable semantic categories in the reconstructed images. This algorithm allows for more fine-grained analysis of preferences across the visual system. Strengths: There have been many papers that reconstruct images from brain activity. However, all of them require some form of retraining the diffusion model. This approach only needs an encoding model that can be trained independently. The extensive human evaluation is great. While this goes beyond what I would ask for a NeurIPS paper, I think it would be great to also verify the images by showing them back to human subjects. Weaknesses: The authors show that the reconstructed images possess stable properties that can be identified by humans and by a CLIP network. However, they don't show that the images also do what they are supposed to do, which is to activate a particular subnetwork. While human experiments probably go beyond the scope of this NeurIPS paper, one way to do that would be to take **another** encoding model for the same neural activity, take it as a proxy for the brain, and show the images to that. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: As far as I understand, $\gamma$ trades off between the denoising gradient and the maximization gradient. I couldn't find what value you set it to, or how you chose it. Q2: There is a preprint that describes pretty much the same idea as in the paper but uses it for V4 single neurons: Pawel A. Pierzchlewicz, Konstantin F.
Willeke, Arne F. Nix, Pavithra Elumalai, Kelli Restivo, Tori Shinn, Cate Nealley, Gabrielle Rodriguez, Saumil Patel, Katrin Franke, Andreas S. Tolias, Fabian H. Sinz, Energy Guided Diffusion for Generating Neurally Exciting Images, https://www.biorxiv.org/content/10.1101/2023.05.18.541176v1 If one looks at the date, the preprint was submitted around the NeurIPS deadline, so it's unlikely that the authors were aware of it, and I would consider this a case where two groups had the same idea. However, I think it would be fair to mention/cite them. Q3: Can you discuss the compute time required for a single image? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply grateful for your excellent suggestions! We will address specific questions below. Please see the PDF in the general response for additional figures. > **Additional validation** We agree with your comments on additional validation. **There is ongoing work to investigate the performance of BrainDiVE synthesized images in humans using fMRI.** Following your suggestion, we train an alternative encoding model using EVA02_CLIP_B_psz16_s8B [1], a very recently published backbone model with TrV blocks (in contrast to the ViT-B/16 backbone with ViT blocks we use in the paper), jointly trained using masked image modeling and an image-text contrastive loss (the backbone we use in the paper is trained with the image-text contrastive loss alone), which has been independently validated to achieve high ImageNet performance. We freeze the model weights, and train a linear probe with bias to estimate the fMRI activations. We validate that the new model can achieve high $R^2$ that is comparable to our current backbone. **Please see the PDF in the general response for a distribution plot of predicted activations.** We find that our BrainDiVE images can achieve high predicted neural activations when validated on a new backbone. > **Q1) Choice of $\gamma$** Indeed, a high $\gamma$ gives more weight to the brain activation gradient (more activating, less natural), while a low $\gamma$ gives more weight to the diffusion gradient (more natural, less activating). We set $\gamma$=130, and this is described in Line 149 of the original paper. There is a typo there: it was represented as $\eta$, and we will update the revision to clarify this value. We performed a search in increments of 10, exploring values from 10.0 up to 300.0, and synthesized 100 images for each of the broad category selective regions and recorded the gradient values at each step and the final synthesized images.
We found that values between 110 and 150 yielded gradient magnitudes that largely matched between the brain and diffusion at early time-steps when coarse image structure emerged. The value of 130 was selected as a median value. This is the only hyperparameter that we introduce on top of diffusion models. Approaches like reflected diffusion [2] or dynamic thresholding from Imagen [3] may enable higher $\gamma$ values, and remain an avenue for future research. > **Q2) On concurrent work** We were not previously aware of the work by Pierzchlewicz et al. (2023) on "Energy Guided Diffusion for Generating Neurally Exciting Images" [4]. After closely reading the paper, we agree that the ideas in our paper and their paper are similar. Both works tackle the synthesis of images that are predicted to activate regions of the brain. A broad difference is that our paper targets the higher visual cortex in humans which are believed to encode semantic concepts, while they target V4 in macaques. This yields differences in the final images, where they primarily target the synthesis of complex visual patterns, while we primarily target the synthesis of compositional visual scenes. We thank the reviewer for bringing this interesting paper to our attention, and will include it as a citation in our upcoming revision. > **Q3) Compute time** When using fp16 diffusion models, with fp32 brain encoders, 50 steps of denoising for images at 512x512 resolution requires around 25 seconds on a Nvidia V100 (from 2017). We briefly discuss this in section 9 of our supplemental, and will clarify this in the main paper in an upcoming revision. In practice our experiments were performed on a cluster and consumed 1500 GPU compute hours. Accelerating this processing is a very interesting avenue of research, and we discuss approaches inspired by MagicMix/SDedit [5,6] in section 4 of our supplemental to speed up our work. ⠀ We hope the clarifications have been insightful. 
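The role of $\gamma$ described in Q1 can be illustrated with a classifier-guidance-style update, in which the gradient of the brain encoder's predicted activation is folded into the diffusion model's noise prediction. This is a hedged sketch, not the authors' implementation; the function name, the scalar inputs, and the exact sign/scaling convention are assumptions:

```python
import math

def guided_noise(eps_hat, brain_grad, alpha_bar_t, gamma):
    """Fold an energy gradient (here: d/dx of predicted brain activation)
    into the diffusion model's noise prediction, classifier-guidance style.
    Subtracting the gradient makes the sampler ascend the predicted
    activation; gamma trades off activation strength against naturalness."""
    return eps_hat - gamma * math.sqrt(1.0 - alpha_bar_t) * brain_grad
```

A larger gamma weights the brain term more heavily (more activating, less natural), while a smaller gamma favors the natural-image prior, matching the trade-off described above.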
Please don't hesitate to comment if there are any additional questions! ⠀ [1] EVA-CLIP: Improved Training Techniques for CLIP at Scale [2] Reflected Diffusion Models [3] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding [4] Energy Guided Diffusion for Generating Neurally Exciting Images [5] MagicMix: Semantic Mixing with Diffusion Models [6] SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations --- Rebuttal Comment 1.1: Title: Thanks for the clarifications Comment: Dear authors, thank you for your clarifications. I have read the rebuttal and will stand by my evaluation. --- Reply to Comment 1.1.1: Comment: We appreciate the timely response. Thank you again for the positive evaluation of this paper!
Summary: This study proposes BrainDiVE, a system for synthesizing optimal stimuli for any given region of interest in the brain. The model combines a pretrained latent diffusion model for image generation with a linear “brain encoder” trained to map CLIP feature vectors onto the corresponding brain activity. At test time, a gradient-based optimization iteratively produces an image that maximizes the activity of a predefined set of voxels. The technique is validated on well-known ROIs, producing the expected images. It can also highlight subtle differences between ROIs selective to the same broad category, or functional subdivisions of existing ROIs. The functional hypotheses are validated by human subjects evaluating the properties of the generated pictures. The technique appears simple but can provide meaningful information for neuroscience studies. Strengths: * The method can retrieve subtle differences between ROIs selective for the same class (e.g. OFA and FFA). * The method can highlight functional differences between sub-clusters of existing ROIs (e.g. food clusters). * The qualitative observations are validated by behavioral evaluations from human subjects. Weaknesses: * The Related works section tends to overstate the novelty of the technique. It only mentions NeuroGen as prior work, whereas other prior studies had also attempted to generate optimal stimuli for specific ROIs: for instance, Ratan Murty et al. (2021), Ozcelik et al. (2022), Ozcelik et al. (2023). The latter also used a diffusion model for image generation, as done here. Furthermore, you seem to be aware of at least some of these studies, since you criticized one of them in your Methods section for using a “hand-derived” prior. It would be much better to properly acknowledge all these studies in the Related works section, and clarify your method’s advantages/disadvantages (e.g. no hand-derived prior/time-consuming iterative method).
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * I do not understand why you systematically need to report the top-5 or top-10 out of 500 generated images. Do I understand correctly that all 500 images in the pool are generated for the same objective (ROI maximization)? If yes, then what does this imply about the remaining 99% of images? Are they worse than NeuroGen? Are they actually failure cases? What do the worst 1% of images look like? If they are not representative of the expected category, does that invalidate your optimization method? * Appendix, section 9, training objective: this section is useful for understanding the image optimization process. One thing I do not understand from this section or from Figure 2 is why the optimization is performed in the diffusion latent space, rather than in CLIP space. Why do the gradients have to flow all the way into the diffusion model? Couldn’t you iteratively optimize the $CLIP_{img}$ vector for the same objective, and then use it to condition the diffusion model in one pass (as you would with a text prompt)? This should be computationally more efficient, and I don’t see any reason why it would be less accurate. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are properly acknowledged. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your detailed and concrete suggestions! We will incorporate all of your feedback into our paper. > **Comparisons to prior work** Indeed, our work builds upon the foundations laid by Ratan Murty et al. (2021), Ozcelik et al. (2021), and Ozcelik et al. (2023). It was not our intention to imply otherwise. We originally included two of these papers in the "Related Works" sections of our paper (Murty on Line 35 and 72, Ozcelik on Line 67). In a revised version, we will explicitly highlight the connections between our work and these referenced papers and will additionally cite Ozcelik et al. (2021). Briefly: 1. Ratan Murty et al. (2021) builds upon an adversarially trained BigGAN image generator trained on ImageNet with category conditioning. Their brain encoder consists of an ImageNet-trained frozen ResNet50 backbone, along with a linear decoder layer. Our work utilizes a text-conditioned diffusion image model trained with score matching and classifier-free guidance [1] (∇log[p(y|x;t)] = ∇log[p(x|y;t)]-∇log[p(x;t)]; y=text, x=image). This enables our diffusion model to sample from the unconditional image distribution via the score function ∇log[p(x;t)] in the absence of text conditioning. Our brain encoder consists of a frozen ViT-B/16 backbone along with a linear probe, which enables brain-conditioned image synthesis. A major difference lies in the image synthesis process. As BigGAN is trained on category conditioning (1000 classes in a one-hot vector), Murty et al. perform a joint optimization over c (category) and z (latent). As the category is discrete, they use softmax(c) to facilitate gradient optimization. It is not clear that a softmax is suitable, as BigGAN is not trained with non-discrete classes. An alternative could be Gumbel-Softmax [1], but this has high gradient variance. In contrast, our work performs end-to-end differentiable optimization.
So BrainDiVE is not restricted to a particular ImageNet class like NeuroGen by Gu et al., or a convex combination of class embeddings as in Murty et al. 2. The method by Ozcelik et al. (2023) is state of the art for brain-based image reconstruction using diffusion models, and is conditioned on a pattern of brain activations. In contrast, the primary goal of our work is to generate images that are predicted to maximally activate a given region of the brain, not reconstruction. In BrainDiVE, we only need to train an fMRI encoder, and the region can be flexibly defined as any subset of the brain without retraining. In Ozcelik et al., when they perform region activation, they set activated voxels to 1, and others to 0. This is followed by manual latent normalization. They achieve intriguing results using this hand-crafted heuristic, but have the implicit assumption that other voxels are zeroed when one region is active. > **Q1): On top-5 or top-10** To clarify, the top-5 and top-10 are for qualitative **visualization only**. Our numerical evaluation follows prior work in image generative models (DALL-E [2]) and brain-based activating image synthesis (NeuroGen by Gu et al. 2022, citation 9 in paper), and uses the top 20% like NeuroGen unless otherwise noted. Our top-5/top-10 for visualization is done automatically using the brain encoder, without any manual cherry-picking. For numerical results, we use the brain encoder without manual cherry-picking to evaluate the top 10% and 20% (100 or 200 images) in 4.2, and follow NeuroGen and use the top 20% in 4.3/4.4. Notably, this is used for visualization *and* evaluation in OpenAI's DALL-E (called reranking): `all samples used for both qualitative and quantitative results ... use reranking with N = 512` [2], where they use the `top 32 of 512 after reranking with CLIP, but we do not use any manual cherry-picking` (online post). Like DALL-E, reranking is used in NeuroGen (Gu et al.
2022), where they only analyze the top-100 of 500 images as reranked using their brain encoder (confirmed with the author), and only visualize the top-k of 500 images. We also rerank using our brain encoder. In general, diffusion models do not always converge during generation. The bottom 1% for the brain generally look like the bottom 1% for text-conditioned generation, and usually have no recognizable objects. This is an active area of research (see Imagen's dynamic thresholding approach). > **Q2): Optimizing the CLIP latent** This is indeed an idea we had as well! We believe there are broadly two ideas you are touching upon: 1. Given that we know the optimal fixed CLIP image embedding (512 or 768 dim), in theory it should be easy to condition the diffusion model on this embedding to replace the typical CLIP text embedding. In practice, high-performance diffusion models are text-conditioned, but are not conditioned on a fixed-size (512/768) text embedding; instead they are conditioned on 75 token embeddings (plus start/end tokens) for an entire sentence in order to model visual semantic composition. Solving for a fixed-size CLIP image embedding does not give us the full text embedding. In addition, there exists a modality gap between CLIP image and text latents [3], and closing the gap is an active area of research. 2. An alternative may be to solve for the CLIP text embedding directly via gradient descent. However, current work [4] requires a handcrafted text prompt, and has been demonstrated only for single objects. In addition, these approaches require ground truth images for a given concept, which we do not have. ⠀ Thank you again for your wonderful suggestions, and we're eager to hear your thoughts on our clarifications. Please let us know if you have any additional questions or comments!
⠀ [1] Categorical Reparameterization with Gumbel-Softmax [2] Zero-Shot Text-to-Image Generation [3] Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning [4] An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion --- Rebuttal Comment 1.1: Comment: Thank you for your responses, which I found clear and satisfactory. In consequence, I have raised my score to 7. --- Reply to Comment 1.1.1: Comment: We are deeply grateful for the positive assessment you've given to our paper! Thank you again for your suggestions and comments.
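The classifier-free guidance identity quoted in the rebuttal above, ∇log[p(y|x;t)] = ∇log[p(x|y;t)] - ∇log[p(x;t)], is typically applied in ε-prediction form. Below is a minimal, generic sketch (function and argument names are hypothetical) showing how a guidance weight of zero reduces to the unconditional score that BrainDiVE relies on:

```python
def cfg_noise(eps_uncond, eps_cond, guidance_weight):
    """Classifier-free guidance in epsilon-prediction form:
    eps_tilde = eps(x) + w * (eps(x, y) - eps(x)).
    With w = 0 only the unconditional prediction remains, i.e. the model
    samples from the unconditional image distribution."""
    return eps_uncond + guidance_weight * (eps_cond - eps_uncond)
```

In a real sampler `eps_uncond` and `eps_cond` are the U-Net's noise predictions with the conditioning dropped and present, respectively; the zero-weight case is what lets a text-conditioned model be reused as an unconditional natural-image prior.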
Summary: This paper introduces Brain Diffusion for Visual Exploration (BrainDiVE), aimed at exploring the fine-grained functional organization of the human visual cortex. Motivated by the limitations of previous studies that relied on researcher-crafted stimuli, BrainDiVE leverages generative deep learning models trained on large-scale image datasets and brain activation data from fMRI recordings. The proposed method uses brain maps as guidance to synthesize diverse and realistic images, enabling data-driven exploration of semantic preferences across visual cortical regions. By applying BrainDiVE to category-selective voxels and individual ROIs, the authors demonstrate its ability to capture semantic selectivity and identify subtle differences in response properties within specific brain networks. It is also shown that BrainDiVE identifies novel functional subdivisions within existing ROIs, highlighting its potential for providing new insights into the human visual system's functional properties. Strengths: The paper's methodology shows promise in capturing semantic selectivity and identifying fine-grained functional distinctions within visual cortical regions. Also, the paper's potential significance lies in applying BrainDiVE to understand the fine-grained functional organisation of the human visual cortex. By providing insights into category selectivity, response properties, and sub-regional divisions, the paper opens avenues for further exploratory neuroscience studies. To achieve this, the experiments the authors perform in the manuscript are extensive, covering: 1. the semantic specificity of the method, assessed by decoding images from task fMRI and literature-obtained ROIs; 2. a comparison of the abstraction of face representation in the brain, using images decoded from two regions, the fusiform face area (FFA) and the occipital face area (OFA); 3.
use of the method to extend knowledge of brain function by finding subdivisions in known areas, in the case of food decoding. The authors describe the experimental setting clearly at the beginning of section 4 for all cases. The experiments show that the manuscript has a good balance between the combination of known methodological approaches and addressing interesting questions in neuroscience. Specifically, the authors make a commendable effort in obtaining quantitative results using human evaluators (cf. Tables 3 and 4). Finally, the authors show, through their results, evidence that their method is a window into understanding region-specificity hypotheses for brain regions known to be involved in different aspects of visual processing. Weaknesses: The paper could benefit from comparing its results with previous contributions [e.g. 9 and 10] to highlight its novelty and contributions in relation to existing methods. The methodology part lacks clarity, particularly in explaining key components like the diffusion model architecture and brain-guided synthesis, and in clarifying the role of the image-to-brain encoder in influencing the denoising process during inference. The generalisability and reliability of the results are hard to assess with a small dataset, specifically just 10 subjects. Furthermore, the paper lacks clarity on how the sub-divisions of the visual cortex are being verified or validated. It is not clear from the experiments whether the results are specific to the analysed regions or whether the algorithm is overly biased by the experimental condition. For instance, what would happen if the authors try to decode an area not specific to visual processing? An example of this would be using subsections of the orbitofrontal gyrus or other brain areas not expected to perform well in reconstructing visual stimuli. As a small point, the authors should review the presentation of images.
Figure 1 shows mostly best-case scenarios, which are then not as good in Figure 4 (for instance, the case of images generated from face voxels). This might bias readers. Second, some details, such as Figure 4 appearing before Figure 3, make reading the paper confusing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Validation of BrainDiVE's Effectiveness: a. Can the authors provide more details on the validation process to ensure that BrainDiVE-generated images indeed effectively activate the targeted brain regions? b. Could the authors consider conducting more extensive comparisons with other existing methods [e.g. 9 and 10], to demonstrate the advantages and uniqueness of BrainDiVE in eliciting specific brain activations? c. Could the authors show the results on decoding a baseline region which is not involved in visual processing? 2. Clarity in Methodology: The methodology part could benefit from more clarity and detailed explanations of key components, such as the diffusion model architecture and the exact implementation of brain-guided synthesis. Improving the architecture in Figure 2 might help. 3. Statistical Analysis of Qualitative Evaluation: As the paper relies on qualitative evaluation with 10 subjects, could the authors mention this explicitly and state what limitations are expected from this in the limitations section? 4. Ethical Considerations: Could the authors describe the demographics of the population, or acknowledge the lack of this information, to indicate whether the results are biased toward a specific gender or population group? 5. Reproducibility: Are the authors intending to release the code and the data that are not publicly available, such as the scores produced by the human raters, upon acceptance of the paper? This is key to guaranteeing reproducibility. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have not addressed the negative societal impact of their work, but this can be fixed by adding specific text in the Discussion section. The authors don't mention the demographic characteristics of the small human subject database that they have used, nor do they mention any ethical concerns related to decoding images from brain activations. This is fixable, so the authors should address it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the excellent suggestions. We address specific questions below, and will include additional details in the general response. > **Formatting and presentation** We will update our paper to follow other image synthesis papers like OpenAI's DALL-E 2 [1] and GigaGAN [2], they similarly select/curate images for the first figure and note this in the caption. We will also update the figure numbering. > **Q1.1) Validation of effectiveness** For within ROI clusters, OPA & food clusters remained stable across random cluster initializations, and showed the highest cosine distance for voxel embeddings among all ROIs we examine. More broadly, we agree on the importance of validating and conducting fMRI studies to evaluate BrainDiVE. **We have ongoing work in this area**. In the paper we perform validation via two methods: 1. When we target voxels from widely accepted category selective regions (faces, places, bodies, food, word), we perform CLIP 5-way classification using a natural language probe. The probe sets follow the guidelines detailed in section 3.1.4 of [4]. We provide our full probe set in page 26 of our supplemental. We found that images created using BrainDiVE were aligned with scientific consensus for broad category selective visual regions. 2. When voxels from single ROIs (FFA/OFA), and within ROIs (Food/OPA) are targeted, we perform behavioral studies. Ten different subjects were recruited for each region (FFA, OFA, Food, OPA) via Prolific. This yielded 80 evaluations per-subject per-question, for 10 subjects each. We find that BrainDiVE can highlight differences between ROIs (FFA vs OPA) and within ROIs (Color in food voxels, and indoor/outdoor in OPA). This is important as it may facilitate future open-ended data driven exploration of the visual cortex. We further validate using an alternative fMRI encoder backbone in the **response PDF**, and find our images can achieve high predicted activations. 
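The CLIP 5-way probe described above reduces to a cosine-similarity argmax over embeddings. A minimal sketch of that logic follows; toy vectors stand in for real CLIP image/text embeddings, and all names are illustrative, not the authors' implementation:

```python
import numpy as np

# Hedged sketch of the 5-way classification probe: score an image
# embedding against 5 text-probe embeddings by cosine similarity and
# predict the highest-scoring category. Toy vectors replace real CLIP
# outputs; function and variable names are hypothetical.
def five_way_probe(img_emb, text_embs):
    img = img_emb / np.linalg.norm(img_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))

# 5 orthogonal "text" embeddings; the "image" is built closest to probe 3.
text_embs = np.eye(5, 8)
img_emb = text_embs[3] + 0.1 * np.ones(8)
print(five_way_probe(img_emb, text_embs))  # 3
```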
> **Q1.2) Comparisons with existing methods** We concur that our work builds upon Gu et al. and Ponce et al. ([9, 10] in text). In particular, the setup of our work is similar to [9], as we both use the NSD dataset and operate offline (no live subject). Our work is not directly comparable to [10], as their experiments use real-time macaque brain feedback, which our work and [9] do not have. Using BrainDiVE, we probe the visual cortex selectivity of the brain at three hierarchical stages, and offer new insight on region-wise selectivity. Briefly, our work differs from [9] in two important ways: 1. We use a diffusion model trained on billions of internet images, three orders of magnitude more data than BigGAN's ImageNet training set; this is important as we are not restricted to the single-object images that dominate ImageNet. We further leverage the model's training with classifier-free guidance [5] ($\nabla\log p(y|x;t) = \nabla\log p(x|y;t) - \nabla\log p(x;t)$, with $y$=text and $x$=image) to use the model in an unconditional mode, and combine it with our brain encoder to enable brain-conditioned sampling. This is in contrast to Gu et al., who use BigGAN trained on ImageNet and condition on class labels. 2. The brain-optimized image generation process is very different. [9] uses a search-then-optimize procedure. They first sample images from the 1000 ImageNet classes, and **non-differentiably** select the top-10 classes for each region. Finally, they perform brain gradient updates of the GAN latent. This results in images with less diversity, and many of their images are nearly identical. In contrast, our work yields diverse images by using end-to-end differentiable optimization, **and does not restrict the search space to a fixed image category a priori**. Please see the **response PDF** for more comparisons. > **Q1.3) Non-visual cortex results** Please see the **response PDF** for OFC results. As expected, we find our method does not yield consistent semantics in OFC. 
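The classifier-free guidance identity quoted above is Bayes' rule differentiated with respect to the image. A toy one-dimensional Gaussian check of that identity (our illustration with a conjugate model of our own choosing, not the paper's setup):

```python
import numpy as np

# Check grad_x log p(y|x) = grad_x log p(x|y) - grad_x log p(x) on a toy
# conjugate-Gaussian model (chosen for illustration only): x ~ N(0,1),
# y|x ~ N(x,1), so the posterior is x|y ~ N(y/2, 1/2).
x, y = 0.3, 1.7

grad_log_lik   = y - x        # d/dx log p(y|x) for N(x, 1)
grad_log_post  = y - 2.0 * x  # d/dx log p(x|y) for N(y/2, 1/2)
grad_log_prior = -x           # d/dx log p(x) for N(0, 1)

assert np.isclose(grad_log_lik, grad_log_post - grad_log_prior)
print(grad_log_post - grad_log_prior)  # 1.4, equal to y - x
```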
> **Q2) Clarity in methodology** We agree that this could be further clarified. We will provide an updated Figure 2 in an upcoming revision. The brain encoder predicts fMRI activations from images. Diffusion models predict the score (derivative of the log-likelihood) of the image distribution, and can be treated as a special class of energy-based models. A brain encoder that outputs real numbers can be interpreted as an energy function [6], the derivative of which can be additively combined with the diffusion score to enable conditional sampling of naturalistic images that maximize brain response. > **Q3) Statistical analysis of human evaluation** Each of the 10 subjects performed 80 binary evaluations per question; we collect 8800 total responses (11 questions, 10 splits, 4 NSD subjects, for both BrainDiVE/real images). Due to space constraints, we provided standard error (SE) measurements in section 5 of the supplemental. We will provide further statistical analysis in an upcoming revision. > **Q4) Ethical Considerations** The 30 subjects are between ages 22~63; 14 women/15 men/1 unknown; 28 white, 2 black, 1 mixed, 1 unknown. We currently have a "Broader Impacts" section in the supplemental; we will update this in the revision to discuss the demographics and additional issues. > **Q5) Code release** Yes, we will release the code and human evaluation data upon acceptance. We have also sent the AC a comment linking to an anonymous repo containing our code. ⠀ We're grateful for your clear and helpful suggestions. In light of our response, we hope you might view our work in a more positive light. Please feel free to let us know if you have additional comments! 
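The additive combination described in Q2 can be sketched in a few lines; this is a schematic with toy stand-ins for the real networks, not the authors' implementation, and all names are hypothetical:

```python
import numpy as np

# Schematic of brain-guided sampling: the gradient of the encoder's
# predicted activation (treated as an energy) is added to the diffusion
# model's unconditional score. Toy callables replace real networks.
def guided_score(x, uncond_score, encoder_grad, weight=1.0):
    return uncond_score(x) + weight * encoder_grad(x)

uncond = lambda x: -x                  # score of a standard Gaussian
enc_grad = lambda x: np.ones_like(x)   # gradient of a linear "encoder"

x = np.zeros(4)
print(guided_score(x, uncond, enc_grad, weight=2.0))  # [2. 2. 2. 2.]
```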
⠀ [1] Hierarchical Text-Conditional Image Generation with CLIP Latents [2] Scaling up gans for text-to-image synthesis [3] Selectivity for food in human ventral visual cortex [4] Learning transferable visual models from natural language supervision [5] Classifier-free diffusion guidance [6] Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc --- Rebuttal Comment 1.1: Comment: Thanks for the responses. I have updated my score to “Accept” following the suggestions for the evaluation of a good contribution with impact in a specific field --- Reply to Comment 1.1.1: Comment: We are very grateful for the positive assessment you've given to our work! We would like to again express our appreciation for your suggestions. --- Rebuttal 2: Comment: We appreciate the request for code. The anonymous github for the paper is now in a publicly visible comment. Also reproduced here for convenience: https://anonymous.4open.science/r/BrainDiVE_code-FF0C
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their constructive suggestions, which we agree will significantly improve the communication of our work. We are very encouraged by the reviewers' evaluation of the quality of this paper. All four reviewers find the work interesting ("methodology shows promise in capturing semantic selectivity" (MC3H); "The method can retrieve subtle differences between ROIs" (fpHd); compared to prior works -- "this approach only needs an encoding model that can be trained independently" (SV7e); "The experiments and visualizations are in good condition and sufficient" (Vg6A)). ### General clarifications ### 1. Scope and experiments * We propose BrainDiVE -- a method that utilizes diffusion models to investigate functional specialization in the higher visual cortex. * Our work relies on end-to-end differentiable optimization with an image-computable voxel-wise fMRI encoder, and leverages an existing large-scale image diffusion model without retraining. It can be flexibly targeted at arbitrary regions in the visual cortex. * We apply BrainDiVE to explore the brain's visual cortex selectivity at three hierarchical levels and offer new scientific insight into the selectivity of different regions. * First, we apply it to widely accepted category-selective regions -- faces, places, bodies, food, words. * Second, we apply it to ROIs that code for faces at different levels in the feature hierarchy (FFA, OFA). Our results are in line with the widely held belief that OFA responds to lower-level face features relative to FFA. * Third, we apply it to splits within food-selective and place-selective (OPA) ROIs, where we identify potentially novel functional subdivisions. * Our evaluation is done in two different ways: * For category-selective regions, we perform CLIP 5-way classification using natural language. The design of our prompts follows the best practices defined in 3.1.4 of [1]. 
The prompts are available in section 9 of the supplemental. We observe that BrainDiVE images indeed capture the category specificity of these regions. * For FFA and OFA, which both code for faces; subregions of food; and subregions of OPA, which codes for scenes -- we perform a human behavioral study to validate the fine-grained visual-semantic attributes. We recruit 10 subjects for each set of comparisons. This results in a total of 8800 responses (11 questions, 10 splits, 4 NSD subjects, for both BrainDiVE and real images). We report standard error metrics in section 5 of the supplemental. We find that BrainDiVE can highlight differences in preferred attributes between visual regions, suggesting that it can be useful for future data-driven exploration of the visual cortex. The ROI masks are derived from functional localizer results from the official NSD paper (faces, places, bodies, words, OFA, OPA), or obtained from other authors directly (food) [2]. We include visualizations of the image synthesis process and brain gradients in **section 4** of the supplemental. We perform a theoretical evaluation of our work in **section 9** of the supplemental. We agree on the importance of evaluating the images using real humans, and an effort to collect fMRI data is ongoing. ### 2. Relationship to concurrent and prior work To our knowledge, we are the first to apply diffusion models to investigate the selectivity of the human higher visual cortex. Unlike prior work, we are not doing image reconstruction. There is concurrent work by Pierzchlewicz et al. [3] which applies diffusion models to investigate the selectivity of macaque V4; we will cite this in an upcoming revision. In our work we cite papers which we believe are highly relevant (in-paper citation numbers): Gu et al. [9], Ponce et al. [10], Murty et al. [11], Takagi et al. [46], Ozcelik et al. [47]. We build upon the insights from these works. 
Our work is most similar to [9], as we use the same dataset, while [10] uses real-time responses of macaque visual cortex neurons for image synthesis. * Unlike [9, 11], we perform end-to-end gradient optimization, and do not rely on fixed categories identified via search as in [9], or on a softmax relaxation of categories as in [11]. In addition, we use a diffusion model trained on billions of images. Due to this, our synthesized images are not restricted to the single-object images that dominate the 1000 classes from ImageNet as in [9, 11]. * Different from [46, 47], we are not performing reconstruction of visual stimuli. Instead, we are proposing to use a diffusion model as a component for the synthesis of novel visual stimuli that are predicted to match the selectivity of a region. We do not rely on a fixed-size input, and regions can be flexibly defined without retraining. [47] achieves intriguing results by setting voxels to 0/1 followed by latent normalization, but this relies on the implicit assumption that voxels outside a region are inactive. We will include a more extensive discussion of related work in an upcoming revision. ### 3. New figures/results in response PDF 1. We further validate the BrainDiVE results using an encoder with an alternative pretrained backbone. We find that the results are robust. 2. There is a more extensive comparison of BrainDiVE and NeuroGen from Gu et al. This highlights the semantic fidelity and diversity of our images. 3. We apply BrainDiVE to a region outside the visual cortex, which is not known to have visual selectivity. We confirm that the images do not show consistent semantic trends. Additional results will be included in an upcoming revision. ⠀ We genuinely appreciate the suggestions, and believe our paper will be improved with your feedback. Please let us know if you have any additional questions or comments! 
⠀ [1] Learning transferable visual models from natural language supervision [2] Selectivity for food in human ventral visual cortex [3] Energy Guided Diffusion for Generating Neurally Exciting Images Pdf: /pdf/0fa77441a3770e6347f9d23304ddc7b540902051.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
LuminAIRe: Illumination-Aware Conditional Image Repainting for Lighting-Realistic Generation
Accept (poster)
Summary: This paper proposes a method for illumination-aware conditional image repainting (LuminAIRe). Different from conventional conditional image repainting (CIR), LuminAIRe combines environmental lighting estimation, 3D normal estimation, and illumination injection to achieve harmonized lighting effects in both foreground and background regions. To validate the effectiveness of the proposed illumination-aware repainting method, a new dataset named CAR-LuminAIRe, with 52581 composited images using 198 detailed 3D car models and 1321 background images, was proposed. The overall performance of LuminAIRe is convincing, especially regarding the generated shadows in the background images and the illumination of the car in the foreground images. The effectiveness of the proposed method was well validated by the experiments and ablation studies. Strengths: + Combining physical information into vision tasks is a promising and practical direction. The idea of incorporating lighting and 3D geometric information for illumination-aware conditional image repainting is interesting and sound. + The repainted car images have more plausible lighting effects than those of previous methods. + The problem formulation is clear and consistent with the equations in Sec. 5, which makes the paper easy to follow and understand. + For dataset creation, the insertion points are carefully specified by segmenting the "placeable flat ground", which is more physically plausible than previous datasets. Weaknesses: - The variation in the shape of the cars in the results and experiments is too small. This means that the shapes of the repainted cars are too similar to each other, which may limit the shape generalization capacity of the proposed method. - CAR-LuminAIRe was rendered at a resolution of 256x256, which is not enough to support high-resolution CIR tasks. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Why not also estimate the 3D shape and position of the car for direct relighting, without using so many neural networks? - In Fig. 7, the authors claim that "the illumination injection helps foreground generation by comparing ours-HA and ours-HAI". However, although missing some lighting effects, in my opinion the results of ours-HAI look better than those of ours-HA; please clarify this in the response. - How about other types of objects? Adding a type of object with more significant shape variation than cars would significantly improve the quality of the paper, but would clearly take much more effort, so this is only an option, not serious advice. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your very detailed review and suggestions; here we give responses to the mentioned concerns. We are looking forward to discussing with you during the author-reviewer discussion period. ### Concerns about data diversity We have tried our best to collect car models with sufficient diversity. However, most available car models are biased in shape (see the dataset statistics in the response for **reviewer oBS5**). This issue could be solved by introducing other categories of data that are less biased in shape distribution. ### Concerns about the resolution of the proposed dataset We use the low-resolution version ($256\times 256$) of the CAR-LuminAIRe dataset in all of our experiments mainly to save computational resources and accommodate the memory limit of the GPU. We have also rendered a high-resolution version ($512\times512$) of the CAR-LuminAIRe dataset, which will be released upon acceptance to better facilitate high-resolution tasks. ### Discussion about potential full 3D solutions > Why not also estimate the 3D shape and position of the car for direct relighting without using so many neural networks? The motivation for our work starts from image-level repainting tasks, and the final desired output is in image space, which is user-friendly for people without expert skills. Therefore, a full 3D pipeline is unnecessary in our task setting. If we adopted full 3D generation and pose estimation for the car, we would still need explicit lighting estimation for relighting. Besides, to generate reasonable shadows from 3D foreground objects, a (partial) 3D reconstruction of the background scene may also be needed (otherwise, we are back to using neural networks to generate shadows, which might be more complicated since the inputs are in 3D), which would be rather challenging considering the single-image setting of our task. 
Our pipeline breaks down the LuminAIRe task into multiple easier sub-problems using different networks (all have simple architectures). However, a full 3D solution may be more challenging for networks to handle (*e.g.*, how to map the user-specified conditions onto the 3D relightable models), with more expensive computations for 3D processing. ### Discussion on visual qualities in the ablation study > However, although missing some sort of lighting effects, in my opinion, the results of ours-HAI look better than ours-HA. The main reason hurting the visual quality of **Ours-HA** may be the part of the car body where the color is not correctly generated, which corresponds to the “unspecified car part” semantic, as the parsing mask shows. Without hierarchical labeling enhancement in training, the trained models sometimes may wrongly apply/not apply a “strange” color to this part. For the specific case (the top row of **Fig. 7** of the main paper), **Ours-HAI** happens to apply a color close to (darker) the car paint color and may look better than **Ours-HA** taken when ignoring lighting effects (a similar effect is also observed for **Ours-H**). What we want to emphasize by that case is, without illumination injection, the lighting effects are rather unrealistic, leading to a much darker blue color and window glass. Statistically speaking, **Ours-HA** should perform better than **Ours-HAI** in terms of faithfully following the user-specified conditions (R-prcn and SSIM) and realism (FID and M-score), as indicated by the scores in **Tab. 1** of the main paper. A similar trend can also be observed from the first row of **Fig. II** in the attached pdf, where **Ours-A** demonstrates better lighting effects than **Ours-AI**, showing the effectiveness of the illumination injection. ### Scalability of the proposed method Please refer to **Sec. II** of the global response. --- Rebuttal 2: Comment: I would like to thank the authors for their efforts and responses. 
Although the authors provide a general explanation of the generalization of the proposed method, I still cannot be convinced until it is further validated. I think that this paper still needs to be thoroughly polished and experimentally validated. The paper currently does not meet the standards of NeurIPS 2023, and I maintain my rejection of this paper. --- Rebuttal Comment 2.1: Comment: Dear Reviewer oBS5, it looks like you posted your response in the wrong place :)
Summary: This paper presents the ilLumination-Aware conditional Image Repainting (LuminAIRe) task to address the unrealistic lighting effects of recent conditional image repainting (CIR) methods. The main contributions include: 1) introducing the new task of ilLumination-Aware conditional Image Repainting (LuminAIRe), 2) designing a full LuminAIRe pipeline to acquire more realistically repainted results, and 3) collecting a new dataset, CAR-LUMINAIRE, with rich material and lighting condition variants. This paper is generally well written and has a clear layout. Strengths: This paper presents the ilLumination-Aware conditional Image Repainting (LuminAIRe) task to address the unrealistic lighting effects of recent conditional image repainting (CIR) methods. To this end, the authors 1) introduce the new task of ilLumination-Aware conditional Image Repainting (LuminAIRe), 2) design a full LuminAIRe pipeline to acquire more realistically repainted results, and 3) collect a new dataset, CAR-LUMINAIRE. This task is interesting, and the constructed dataset is useful for the illumination community. Weaknesses: The work collects a new dataset, CAR-LUMINAIRE, which is interesting and useful for the illumination community. However, in the process of collecting the dataset, how is the "warping" performed to achieve an aligned envmap? How is the scale of foreground objects handled? I think it is necessary to conduct a statistical analysis of the synthesized dataset. When estimating background illumination information, how is the impact of the content occluded by foreground objects on the overall illumination estimation addressed, especially for larger foreground objects? Compared to the image harmonization task, this paper considers the shadow generation of foreground objects. How do the authors evaluate the quality of the generated shadows? The variety of foreground objects in the dataset is extremely limited, which greatly limits the applicability of the method. 
More types of foreground objects may be more convincing and improve the robustness of the method. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: see the Weakness Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: see the Weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your very detailed review and suggestions; here we give responses to the mentioned concerns. We are looking forward to discussing with you during the author-reviewer discussion period. ### More details on dataset creation > How to perform the "warping" to achieve aligned envmap? Since we adopt a spatially-uniform global lighting assumption for the outdoor scenes, we do not consider depth when warping the environment map, and the warping is essentially a rotation that aligns the coordinate system of the virtual camera with that of the environment map (*i.e.*, the virtual camera points toward the center of the aligned environment map, and the up direction of the virtual camera matches the up direction of the aligned environment map). The rotation can be derived from the camera pitch and yaw (we use no camera roll, to keep the ground in the cropped images level). > How to consider the scale of foreground objects? As stated in **L. 158-159** of the main paper, all car models are resized to fit their real-world dimensions. Therefore, the scales of foreground objects appearing in the image are determined by the camera FoV used, the 2D insertion point, and the accuracy of the off-the-shelf normal and depth estimation results (which influence the conversion from the 2D insertion point to the 3D relative position). By converting the depth estimation results into real-world units, ideally (if all estimations were perfect), the rendered cars would be precisely the same size as a real car at the corresponding 3D point in the scene. However, due to estimation errors, the scale of the rendered objects may not always be reasonable, and therefore filtering of the rendered data is conducted. > I think it is necessary to conduct a statistical analysis of the synthesized dataset. Thanks for the helpful suggestion. We follow it to provide more statistical analysis of our CAR-LuminAIRe dataset: 1. 
Portions of pixels of foreground region. | 10%~15% | 15%~20% | 20%~25% | 25%~30% | 30%~35% | 35%~40% | 40%~45% | 45%~50% | | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | | 27.9% | 21.2% | 16.4% | 12.0% | 8.4% | 5.6% | 3.7% | 4.8% | 2. Distributions of the car types. | hatchback | sedan | race | liftback | CUV | pickup | micro | roadster | | :-------: | :---: | :--: | :------: | :---: | :----: | :---: | :------: | | 14.6% | 22.0% | 6.0% | 8.8% | 14.0% | 5.3% | 3.6% | 2.7% | | SUV | MPV | minivan | sports | coupe | universal | minibus | super | van | | :--: | :--: | :-----: | :----: | :---: | :-------: | :-----: | :---: | :--: | | 6.7% | 1.8% | 0.9% | 3.8% | 3.5% | 2.3% | 0.2% | 2.5% | 1.3% | 3. Distributions of the types of car paints. | metallic | clearcoat | frosted | flake | diffuse | | :------: | :-------: | :-----: | :---: | :-----: | | 18.2% | 26.8% | 22.9% | 19.8% | 12.3% | 4. Distributions of the used camera FoVs. | $27\degree\sim31\degree$ | $31\degree\sim36\degree$ | $36\degree\sim41\degree$ | $41\degree\sim46\degree$ | $46\degree\sim51\degree$ | $51\degree\sim56\degree$ | $57\degree\sim61\degree$ | $61\degree\sim66\degree$ | | :----------------------: | :----------------------: | :----------------------: | :----------------------: | :----------------------: | :----------------------: | :----------------------: | :----------------------: | | 3.1% | 4.3% | 8.9% | 21.2% | 23.0% | 22.1% | 11.9% | 5.5% | ### Concerns about lighting estimation As shown in **Fig. 3** of the main paper, our lighting estimation network (NetL) only takes the background image as input. The foreground region of the background image is masked with zeros. Thus there are no foreground objects in the background images distracting the lighting estimation process. Of course, having fewer informative pixels imposes a challenge, we address this issue through three designs: 1. 
The NetL is trained with randomly chosen foreground masks applied on the background images, forcing the network to be more robust to occlusion (sorry for omitting this detail in the supplemental material). 2. The lighting representation we adopt is relatively less complex, which lowers the difficulty of getting a reasonable lighting estimate from limited pixel observations. 3. The synthetic dataset is filtered with respect to the portion of foreground pixels (**Sec. 8.1** of the supplemental material), avoiding extreme circumstances where the background is heavily occluded and the lighting can barely be identified. ### Evaluations of generated shadows > How do the authors evaluate the quality of generated shadows? As suggested by Reviewer Xnyj, we conduct separate evaluations of the foreground and background regions, and the results are shown in **Tab. I** in the attached pdf, where **Ours bg.** performs better than **Original bg.** by a large margin, indicating that the generated shadow makes the repainted background region closer to the reference background. As the low FID score tells us, our background network (NetB) effectively adds a realistic perception to the overall repainting results. ### Scalability of the proposed method Please refer to **Sec. II** of the global response. ### Clarifications In our experiment setting, there are no pre-existing foreground objects awaiting harmonization with the background image. The foreground objects are generated **from scratch** following the user-specified conditions. Therefore, strictly speaking, our method is not directly comparable with image harmonization methods. Nevertheless, **Fig. II** in the attached pdf gives an impression of how image harmonization methods handle challenging lighting effects. --- Rebuttal 2: Title: Thanks for your comment Comment: Thank you for your reply and for acknowledging our efforts. (By the way, it appears that your reply might have been posted in the wrong thread.) 
We understand your concerns about the generalization of our method. We have shown the generalization ability on in-the-wild images in **Fig. 13** of the originally submitted supplemental material, and on casually drawn parsing masks in **Fig. I** of the rebuttal pdf. We will carefully discuss the dataset limitations and scalability, and incorporate more experimental results in the final version of our paper. We are eager to hear more valuable suggestions for polishing our paper.
Summary: This paper proposes a new method called LuminAIRe to address unrealistic lighting effects in image repainting, analogous to cut-and-paste object insertion. The method estimates 3D geometry and environment lighting conditions from background images and parsing masks, and uses physically-based illumination rendering and attention modules to inject physically-correct lighting information into the image generation process. The result is repainted images with harmonized lighting effects in both foreground and background regions. To facilitate and validate this task, a new dataset called CAR-LUMINAIRE with lighting annotations and appearance variants has been collected. Strengths: - Unrealistic lighting effects in image repainting are an open problem, especially getting the background shadows right. This paper attempts to address it. - The use of physically-based illumination rendering and attention modules is interesting. - The repainted images with harmonized lighting effects in both foreground and background regions appear plausible to some extent. - The CAR-LUMINAIRE dataset with lighting annotations and appearance variants, though not as realistic as real images, could potentially be used for prototyping experiments. Weaknesses: - The paper does not compare with or cite several relevant previous works [1, 2, 3, 4, 5]. - L48-49 "As far as we know, the illumination-awareness in image editing tasks has not been emphasized yet" is untrue; for instance, see [2, 3, 4]. - The LuminAIRe pipeline has several components similar to previous works, and the current work has completely overlooked those works and their contributions [see 3]. - The current ablation study does not help figure out which components contribute to the improved results. A leave-one-out ablation is crucial to highlight the differences. That is, ablations (a) without Hierarchical Labeling, (b) without illumination attention, and (c) without illumination injection are necessary. 
- There should also be an evaluation comparing foreground and background consistency separately, to indicate the difference in effectiveness for these two regions. A comparison with image harmonization methods for the foreground objects, and perhaps also with [4], would help clearly show how these methods compare with image harmonization and reshading methods. - Many of the results look unrealistic, and we do not know how they compare to other works like [3] or [4] that show real-world examples. - Missing real-world examples. Cars have complex material properties, with paints and glitter in them. A simple parametric lighting model cannot capture those complex lighting effects. Most of the evaluation is shown on synthetic datasets where the object appearance is mostly diffuse, without strong specularities or reflections. There is no clear evidence in the current paper that the method would translate well to real scenes. - Results demonstrating the change in appearance (both foreground and background) when the environment maps are rotated, with the camera and scene geometry fixed, would also help in understanding how good the results are under changes in lighting conditions. [1] People as Scene Probes. Wang et al. ECCV 2020 \ [2] Repopulating Street Scenes. Wang et al. CVPR 2021 \ [3] GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving. Chen et al. CVPR 2021 \ [4] Cut-and-Paste Object Insertion by Enabling Deep Image Prior for Reshading. Bhattad & Forsyth. 3DV 2022 \ [5] CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation. Wang et al. 
CoRL 2022 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The rebuttal must address the following points raised in the weaknesses: - Clear comparisons with the methods listed in the missing references - Recommended ablations - Real-world examples - Separate foreground/background evaluations - Correcting claimed contributions on "illumination-aware image editing tasks" - Results demonstrating the change in appearance when environment maps are rotated for the same scene and objects Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: - Missing citations, references, and comparison - Missing comprehensive ablation - Missing real-world examples - Overstating claimed contributions and undermining several related works' contributions Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your very detailed review and suggestions; here we give responses to the mentioned concerns. We look forward to discussing with you during the author-reviewer discussion period. Due to the length limit, please refer to the global response for detailed reference items. ### More citations needed We will add citations and discussions of the relevant works mentioned [VII-XI]. [VII] and [VIII] leverage **timelapse image sequences** of street scenes for scene decomposition and extraction of pedestrians and cars (2D images), which are further combined with lighting-based object retrieval, a shadow network, and sun position estimation to composite street-view images with illumination-harmonized objects inserted. [IX] and [XI] use **registered video and LiDAR sensor data** as input to create 3D mesh assets of cars and then insert the 3D car models into the scene with geometry constraints and shadow generation in the video clips for autonomous-driving data enhancement. [X] takes the **background image, foreground image, and mask** of the object to be inserted as input. The reshaded image is lighting-harmonized by forcing a consistent image decomposition using deep image priors and reconstruction supervision. We respectfully believe that it is unfair to compare our method with [VII], [VIII], [IX], and [XI] due to the **inconsistent inputs**. The only method that shares similar input conditions with ours is [X]. We found that the starter code of the suggested method [X] has been released. However, a time-consuming per-image optimization process (~10 min) is needed for each input image. Therefore, we are unable to compare with this work given the limited time during the author response period. We will consider a comparison in the final version. ### More ablation results As suggested, we test two additional ablation variants: **Ours-A** and **Ours-AI**. The results are shown in **Tab. I** and **Fig. II** in the attached pdf. 
Please note that there is no “**Ours-I**” (or “**Ours-HI**”) variant since the illumination attention module cannot be enabled alone without illumination injection. The qualitative results show that **Ours-A** gives results with inconsistent lighting effects and wrongly loses/adds highlight effects in the first/second rows, while **Ours-AI** has no awareness of environment lighting and therefore wrongly gives a diffuse appearance/hallucinates lighting effects in the first/second rows. The trends of the quantitative scores also confirm the visual perceptions. So far, we have tested all possible ablation variants: **Ours**, **Ours-H**, **Ours-A**, **Ours-AI**, **Ours-HA**, and **Ours-HAI**. ### Comparison with image harmonization methods As stated above, we cannot compare with [X] during the rebuttal period. Instead, we compare with two of the latest image harmonization methods: **DHT+** [I] and **PCT-Net** [II]. Since there are **no foreground objects to be harmonized in the background images** and we care about the extent to which image harmonization methods can recover realistic lighting effects, we use the generated results of **Ours-AI** (without illumination injection in foreground generation) as inputs of **DHT+** [I] and **PCT-Net** [II]. The results are shown in **Tab. I** and **Fig. II** in the attached pdf. There are reasonable leads in the M-score of **DHT+** [I] and **PCT-Net** [II], indicating better integrity of the harmonized results. **PCT-Net** [II] also improves the perception of realism compared with **Ours-AI**, as indicated by the decrease in the FID score. However, the degraded R-prcn and SSIM scores indicate the harmonized images may deviate from the user-specified conditions, as also shown in the qualitative results (**Fig. II**), where the harmonized images do not show better lighting effects and may have severe color-shifting issues. We will add such comparisons in the final version. ### Real-world examples Please refer to **Sec. 
I** of the global response. ### Separate evaluations for foreground and background regions We conduct separate quantitative evaluations as suggested. When evaluating the foreground/background regions in the full images, we mask the background/foreground regions with zeros. The quantitative results are reported in **Tab. I** in the attached pdf (the R-prcn and M-score metrics are only meaningful for full images and thus not computed), which shows that our method also outperforms baseline methods w.r.t. foreground generation, and the trends are consistent with the **Realistic** preference of the user study results shown in **Tab. 1** of the main paper. Besides, **Ours bg.** also performs better than the **Original bg.** used by baseline methods, showing the effectiveness of our background repainting network (NetB). ### Correcting the overclaim We are sorry about the overclaim and agree that several prior works on image/video editing have considered illumination, so we will carefully revise our contribution claims in the final version. ### Illustrations of rotating estimated lighting As suggested, we conduct the experiment of rotating the estimated lighting while keeping other conditions unchanged. The results are shown in **Fig. III** in the attached pdf. “No light” denotes the repainted results with no illumination information (which is not **Ours-AI**, but **Ours** with illumination disabled at test time), and the degrees (*e.g.* $0\degree/180\degree$) mark the clockwise azimuth rotation angles (in the camera coordinates) of the estimated lighting in the first/second row. The corresponding illumination images are shown as insets. Without the illumination information, the generated foreground becomes flat and has no lighting effects, validating the effectiveness of the illumination injection. 
As the estimated lighting rotates, the illuminations/appearances of foreground regions correctly reflect the changes in the lighting effects while the repainted backgrounds show reasonable shadows and demonstrate visually realistic perceptions. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Thank you for the rebuttal and for addressing the concerns I highlighted in my initial review. I revisited the supplementary material, focusing on Fig 13. Unfortunately, the scenes mainly come across as unrealistic. The cars, in particular, have a synthetic and diffuse appearance. They lack the intricate details and complexities we expect from real cars. While I recognize and appreciate your efforts, the appearance in Fig 13 doesn't fully address the concerns about realistic representations of objects. Additionally, Fig III from your rebuttal doesn't effectively illustrate the subtle nuances associated with changing lighting conditions. The variations in lighting and their subsequent effects on the scene are nuanced. This makes it difficult to draw clear conclusions from the figure. The paper's premise is indeed interesting. However, the depth of experimental analysis seems to be lacking. A thorough literature review, and a detailed comparison with related works, particularly methods like cut-and-paste reshading, is crucial. This will help in solidifying the novelty and effectiveness of your approach. I'd like to stress the importance of evaluations that mirror real-world scenarios. I understand that the cut-and-paste reshading method might be time-consuming, taking up to 10 minutes per scene. However, its comparison remains essential. Evaluating your proposed method, even on a smaller test set against the cut-and-paste method, would offer valuable insights. Such an evaluation can shed light on the strengths and potential improvements of your method. It can also position it as a promising approach for the cut-paste insertion task. 
I'd also suggest a more in-depth qualitative and quantitative evaluation when lighting conditions are rotated, building upon your analysis in Figure III of the rebuttal. Considering the points mentioned, I believe the paper isn't ready for acceptance in its current state. A major revision would be beneficial. I recommend you address these issues thoroughly and think about resubmitting to the next suitable venue. I hope you'll find this feedback constructive. --- Reply to Comment 1.1.1: Title: Thanks for your comment Comment: Thanks for your reply and for recognizing our efforts to address your concerns. We will discuss and distinguish the mentioned works [VII-XI] and add experimental results of separate evaluations, more ablations, lighting rotation, and comparisons with relevant single-image editing works [I, II, X] as suggested in the final version of our paper. Below we respond to your new comments. We are aware that our generated cars in **Fig. 13** of the originally submitted supplemental material and in our Car-LuminAIRe dataset lack some details compared with real cars. These details may be crucial in dedicated car-simulation tasks [IX, XI]; however, they are not the main focus of our contribution, which is **the lighting-realistic generation and perception with user-controlled semantics**, and collecting these details for learning-based training is also far beyond what is feasible for data collection. We agree that the cut-and-paste reshading method [X] is a related work to be discussed. 
However, we must again stress the critical difference that makes the comparison essentially unfair (as well as comparisons with image harmonization methods): our method adopts **a lighting-realistic generation** for the repainted foreground region, where **the foreground object is generated faithfully obeying the semantics provided by the parsing mask and attributes**, while the above-mentioned reshading/insertion/harmonization methods [I, II, VII, VIII, X] adopt **a cut-and-paste process** for the foreground content and handle the lighting effects afterward separately, which prevents users from editing/controlling the object content and requires **a ready-to-use foreground object image from somewhere else** (which likely accounts for the more intricate details and complexities). To sum up, these methods cannot handle the lighting-realistic conditional image repainting task proposed in our paper. Nevertheless, we have tested several image cases with the cut-and-paste reshading method [X], adopting the same setting used in **Fig. II**, and a comparison on a larger test set is running. In preliminary results, the reshaded results tend to be smooth (no highlight effects), the color of the background may be incorrectly baked into the foreground region, and the dark and bright shading changes may be unharmonized with the background lighting. This is likely because the complex or noisy background images prevent a reasonable intrinsic image decomposition, the deep image priors used are less able to represent high-frequency outdoor illumination, and the learned shape priors cannot generalize well to the cars in our dataset. Since no extra images are allowed in comments per the policy, we will add the comparison results in the final version as a reference. Although we have shown results in **Fig. 
III** of the rebuttal pdf for validating our pipeline design (where the background content is fixed, the shadings on the car and the shadows on the ground change according to the rotations of the given lighting, showing our method indeed **follows the extracted lighting condition to give lighting-realistic generation**, and removing the lighting gives a piecewise-flat repainted image, showing our method indeed **injects the illumination into the generation process**), from the perspective of an image-level editing task there is no need to rotate the lighting alone, since the inconsistency between the background and the rotated lighting would not only confuse the network but also damage the lighting-realistic perception of humans, as **Fig. III** shows. Like the relevant single-image-based works [I, II, X], our method is not designed for the lighting-rotation application, where a lighting-unharmonized image is the desired output. Nevertheless, we will provide more evaluation results of this setting in the final version for further validation.
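The azimuth-rotation operation discussed in this thread is mechanically simple when the environment lighting is stored as an equirectangular (lat-long) image: rotating about the vertical axis is a circular shift of the pixel columns. A minimal sketch under that assumption (the paper's exact lighting format is parametric, not an image; the function name and sign convention here are illustrative only):

```python
import numpy as np

def rotate_env_map_azimuth(env_map, degrees):
    """Rotate an equirectangular environment map about the vertical axis.

    In a lat-long image the horizontal axis maps linearly to azimuth,
    so an azimuth rotation is just a circular shift of columns. Whether
    a positive shift reads as clockwise depends on the map's handedness.
    """
    w = env_map.shape[1]
    shift = int(round(degrees / 360.0 * w))
    return np.roll(env_map, shift, axis=1)

# Example: a 180-degree rotation shifts the map by half its width,
# and a 360-degree rotation returns the original map.
env = np.random.rand(16, 32, 3)
rotated = rotate_env_map_azimuth(env, 180)
```

This is why lighting-rotation ablations (as in Fig. III of the rebuttal) are cheap to run once lighting is estimated: only the lighting input changes, while the scene geometry and background stay fixed.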
Summary: This paper tackles the task of illumination-aware conditional image repainting. Given an input image and a set of conditions, the proposed method aims to inpaint / re-generate a certain region based on the input conditions. This can be used to achieve functionalities such as object insertion and image composition. Compared to prior works, this paper has the goal of injecting physics-based illumination information into the image generation process. At a high level, instead of formulating this task as a simple image-to-image translation in 2D image space, this work aims to introduce explicit physics-based rendering in 3D into a 2D neural renderer. This is achieved by incorporating physics-based rendering buffers. To enable training and evaluation of the method, the authors also curate a photorealistic synthetic dataset with material and lighting conditions. The results of the proposed method are qualitatively visualized and quantitatively evaluated. A user study is included to compare the photorealism of the edited results. The proposed method can significantly outperform baselines wrt lighting effects. Strengths: In general, I find this paper technically solid and with a sufficient amount of workload. Originality: - The task definition is well motivated. The analysis on why we need 3D information in conditional generation is generally informative and convincing. - The proposed method is sensible and novel. Despite a complicated pipeline, it presents a smart approach to inject a physics-based rendering process into a 2D neural renderer. Quality: - The qualitative and quantitative results outperform baselines and achieve SOTA. Clarity: - This paper is well written and easy to follow. - The descriptions of method details in the main paper and supp are thorough. Significance: - This paper proposes a carefully designed approach for illumination-aware image generation, which has not been extensively explored in recent generative models. 
- The proposed dataset can be beneficial for future research works. Weaknesses: In general, I do not find critical concerns in this paper but have some questions to further elicit insights: - The proposed lighting representation is a slightly modified version of prior works. How does the parametric light representation (in Eq.9) compare to prior sky models [22, 23, 32, 63]? - The model is trained on synthetic data, which can be a concern when the ultimate goal is to apply it to real-world imagery. How well does it work on real-world images, and how can the domain gap be measured? - What is the core advantage of generative repainting compared to fully physics-based lighting estimation methods such as SOLID-Net? The motivation of conditional image repainting is still relatively narrow in scope. The authors could consider including discussion of these works in related works. For explicit lighting estimation, a line of work estimates a 3D lighting volume: - Wang et al. Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion - Li et al. Spatiotemporally Consistent HDR Indoor Lighting Estimation In a similar spirit to this paper, many works in relighting and neural rendering also combine neural modules with PBR. For example, - Philip et al. Multi-view Relighting using a Geometry-Aware Network - Pandey et al. Total Relighting: Learning to Relight Portraits for Background Replacement Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see weaknesses section above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations and failure cases are discussed in the paper and supp. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your very detailed review and suggestions; here we give responses to the mentioned concerns. We look forward to discussing with you during the author-reviewer discussion period. ### Comparison of different lighting representations The parameters in **Eq. 9** share many common variables with prior works since these variables carry the same physical meanings ($l_\text{sun}$ for sun direction and $c_\text{sun}$ for sun mean color). However, the underlying modeling is designed differently, for the following reasons. As a common point, the HW sky model [22, 23], the LM sky model [32, 63], and our model all adopt representations that separate the contributions of sun light and sky light in the environment lighting. The HW sky model is derived by modeling light from the sun going through the Earth's atmosphere, which is spectrum-dependent and somewhat complicated since it takes atmospheric scattering into consideration. The LM sky model is derived by directly fitting the environment map of the sky dome (captured on the ground, that is to say, at the bottom of the atmosphere). The sun light part of the LM model is simplified as a double exponential falloff, while the sky light part is modeled using the analytic Preetham sky model [VI]. In our task, we need to represent environment lighting from a complete panoramic view ($360\degree\times180\degree$), and the sky might be occluded. However, the LM and HW models are used to model only the sky dome ($360\degree\times90\degree$) and are therefore not suitable. Since the ambient light may also come from the ground, where the color variations are much more significant than in the sky, we resort to spherical harmonics (SH) for the low-frequency ambient light. For representing sun light, we tried the double exponential falloff and found it a little overkill for our lighting data and task requirements. 
Therefore, we use a simpler spherical Gaussian (SG) model for the high-frequency sun light (which is further approximated as directional lighting). Our choice of lighting representation allows us to efficiently render the illumination of the object surface with the Blinn-Phong model on the fly, without conducting numerical integration over the surface hemisphere (**Eq. 15** and **Eq. 16** of the supplemental materials), and yields a shallower gradient chain for possible end-to-end training in future work. [VI] A Practical Analytic Model for Daylight. Preetham *et al*. SIGGRAPH 1999. ### Real-world evaluations > How well does it work on real-world images, and how to measure the domain gap? We have shown a set of real-world examples in the supplementary material of the original submission; please refer to **Sec. I** of the global response. For domain gap measurement, the current evaluation protocol cannot be directly used on real-world data due to the lack of ground truth labels; one possibility is to collect a small real-world test set at an affordable cost and then perform the same quantitative evaluation as on the synthetic dataset. ### Comparison with SOLID-Net SOLID-Net is a physics-based lighting estimation method that combines the ideas of intrinsic image decomposition and differentiable rendering. Its final output is spatially-varying lighting represented as panoramic environment maps. The repainting results of our method could be achieved by SOLID-Net using virtual object insertion (VOI). However, even with known lighting, a realistic VOI needs a 3D model along with a properly set shadow catcher, as done in the data collection pipeline of our work. In contrast, for realistic repainting, our method only takes a parsing mask along with user-given attributes as input, which is more convenient and offers the flexibility of editing with different attributes. 
To sum up, our generative repainting pipeline relieves the demand for 3D models and expert skills in the application of virtual object insertion and gives users more freedom to control the generated content while keeping consistently realistic lighting effects. ### More citations needed As suggested, we will add more citations and discussions of works on 3D volumetric lighting estimation and neural PBR relighting. As for the mentioned papers on 3D volumetric lighting estimation, both papers use additional depth maps as input, which helps the learning of a 3D volumetric lighting representation. Wang *et al*.'s paper also utilizes a learned sky dome lighting for long-distance global lighting in outdoor scenes and uses differentiable rendering and adversarial learning techniques to facilitate better lighting estimation for VOI. Li *et al*.'s paper utilizes a spatiotemporally consistent constraint on the video clips by using an RNN to blend the lighting volume estimation results from individual video frames. As for the mentioned papers on neural PBR relighting, Philip *et al*.'s paper uses a 3D proxy of the outdoor scene reconstructed by multi-view stereo to compute shadow masks and RGB shadow images at the source and target lighting conditions, then obtains refined shadow masks via a refinement network, which are used to relight new scene appearances in a neural rendering module, along with illumination buffers from the 3D proxy as other inputs. 
Pandey *et al*.'s paper aims at relighting portraits given the target environment lighting. It conducts intrinsic decomposition to get normal and albedo predictions and then uses the given HDR environment map to render a diffuse light map and a set of specular light maps (which share a similar idea with our illumination candidate images; we will add a clear reference in the final version). Finally, a shading network decides how to combine the specular light maps and generates self-shadowing and specularity effects in the relit result. --- Rebuttal Comment 1.1: Comment: The authors' rebuttal responded to my questions and concerns. After reading the rebuttal and other reviews, I agree with the other reviewers on the limitations of this work. I would encourage the authors to revise accordingly, such as to confront several lines of related work when introducing the task/context, modestly adjust / lower some corresponding claims, and discuss the limitation on the variety of objects in the dataset and scalability. The revision will not weaken the contribution of the paper, but provide a more accurate position of this work in the literature. In general, I do not find the existing concerns critical; they can be fixed by revision. I will maintain my current rating. --- Reply to Comment 1.1.1: Title: Thanks for your comment Comment: We appreciate your acknowledgment of our efforts to address your concerns. We agree that the current concerns can be fixed in the final version of our paper. As suggested, we will carefully revise our paper to confront related work comprehensively, adjust claims modestly, discuss dataset limitations and scalability, and incorporate more experimental results. These changes will enhance our paper's accuracy within the literature without weakening its contribution. Your support and recognition of our contribution are greatly encouraging to us.
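The diffuse/specular light-map idea discussed above (similar in spirit to the illumination candidate images, which are rendered from estimated lighting, normals, and a set of predefined roughness values) can be sketched as follows. This is a heavily simplified illustration, not the paper's implementation: the sun is reduced to a single directional light, the SH ambient term to a constant color, and `illumination_candidates` and its shininess values are hypothetical names chosen here.

```python
import numpy as np

def illumination_candidates(normals, view_dir, sun_dir, sun_color,
                            ambient, shininess_levels=(8.0, 32.0, 128.0)):
    """Render per-pixel Blinn-Phong shading candidates.

    normals: (H, W, 3) unit surface normals.
    view_dir, sun_dir: (3,) unit vectors toward the camera / the sun.
    sun_color: (3,) sun radiance; ambient: constant ambient term.
    Returns one diffuse-only image plus one image per shininess value.
    """
    n_dot_l = np.clip(normals @ sun_dir, 0.0, None)            # (H, W)
    half = sun_dir + view_dir                                  # half vector
    half = half / np.linalg.norm(half)
    n_dot_h = np.clip(normals @ half, 0.0, None)               # (H, W)
    diffuse = ambient + n_dot_l[..., None] * sun_color         # (H, W, 3)
    candidates = [diffuse]
    for s in shininess_levels:
        spec = (n_dot_h ** s)[..., None] * sun_color           # specular lobe
        candidates.append(diffuse + spec)
    return candidates
```

A downstream network (analogous to the shading network in Total Relighting, or the illumination attention module here) can then learn per-pixel weights to blend such candidates into the final shading, rather than committing to one material roughness up front.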
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable and constructive feedback on our paper, and we look forward to a more comprehensive discussion during the author-reviewer discussion period. We are glad and encouraged to see the reviewers’ comments that our paper is “well-written and easy to follow” (**Reviewer Ex3g**) and “performs a thorough ablation study” (**Reviewer U8wa**) with “excellent” presentation (**Reviewers U8wa, Ex3g, and Kvqo**) and “a sufficient amount of workload” (**Reviewer Ex3g**); that the idea is “interesting” (**Reviewers Xnyj, oBS5, and Kvqo**) and “sound” (**Reviewer Kvqo**); that the proposed dataset is “more physically plausible than previous datasets” (**Reviewer Kvqo**), “beneficial for future research work” (**Reviewer Ex3g**) and “useful for illumination community” (**Reviewer oBS5**); and that the proposed method is “technically sound” (**Reviewers U8wa and Ex3g**) and “well validated by the experiments and ablation studies” (**Reviewer Kvqo**). We are also aware that the reviewers have raised many concerns about our paper. Here we post responses to overlapping concerns in this global response and to individual concerns in dedicated responses for each reviewer. In the **attached pdf file**, we provide more experimental results as suggested by the reviewers: 1. Repainting results with rough masks (in **Fig. I**, as suggested by reviewer U8wa). 2. More ablation variants (in **Tab. I** and **Fig. II**, as suggested by reviewer Xnyj). 3. Separate evaluations for foreground and background regions (in **Tab. I**, as suggested by reviewer Xnyj). 4. Comparisons with image harmonization methods (in **Tab. I** and **Fig. II**, as suggested by reviewer Xnyj). 5. Illustrations of appearances as the lighting changes (in **Fig. III**, as suggested by reviewer Xnyj). In the following text, we address the common issues the reviewers raised: ### I. Real-world examples We have provided results of real-world examples in **Fig. 
13** of the originally submitted supplemental material, where the background images are collected from other datasets [III, IV] and never seen in the whole training process. For the other input conditions used in the in-the-wild test, the attributes are randomly generated, and the parsing masks are manually generated for each test image so that the repainted cars are in a suitable place with a proper size. The parsing masks are not hand-drawn but rendered from the 3D annotated car models. To further investigate how the masks influence our results, we provide the results of using rough masks in **Fig. I** of the attached pdf to show how casually-drawn masks would work for our method. ### II. Scalability of the proposed method In our paper, we only demonstrate results of car repainting on outdoor scenes so far. This is not limited by our proposed method but by the feasibility of data collection. The core of our proposed method is injecting extracted lighting information as the illumination images, which would not be affected by changing scenes or object types. On the other hand, our method firmly relies on the learned relationships between semantics and object properties (such as shape, materials, and colors). Therefore, it cannot directly repaint objects with unseen semantics (for example, the semantics of “hair” for persons do not correspond to any semantics of cars). If enough data can be collected for other object categories, the proposed method should work well. That said, we cannot rush out a dataset with a sufficient amount of data and semantic labeling for other object categories within the limited author response period. Since indoor and outdoor scenes have drastically different lighting, prior works often use different representations for indoor [15] and outdoor [21] scenes. 
Therefore, our proposed method trained on outdoor scenes basically cannot be directly used for inference on indoor scenes (there are only a few lighting estimation works that work well on both indoor and outdoor scenes **simultaneously**, and an all-scene lighting representation and estimation is still an open problem). For indoor scenes, our current lighting representation could work by only using the SH lighting part and fixing $z_\text{vis}=0$. Other representations tailored for indoor scenes (such as SVSG) could be better and would also remain compatible with our proposed method. ### References **In the pdf attachment:** [I] Transformer for Image Harmonization and Beyond. Guo *et al*. TPAMI 2022. [II] PCT-Net: Full Resolution Image Harmonization Using Pixel-Wise Color Transformations. Guerreiro *et al*. CVPR 2023. **In the response texts:** [III] Scalability in Perception for Autonomous Driving: Waymo Open Dataset. Sun *et al*. CVPR 2020. [IV] UASOL, A Large-Scale High-Resolution Outdoor Stereo Dataset. Bauer *et al*. Scientific Data 2019. [V] Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes. Wang *et al*. CVPR 2023. [VI] A Practical Analytic Model for Daylight. Preetham *et al*. SIGGRAPH 1999. [VII] People as Scene Probes. Wang *et al*. ECCV 2020. [VIII] Repopulating Street Scenes. Wang *et al*. CVPR 2021. [IX] GeoSim: Realistic Video Simulation via Geometry-Aware Composition for Self-Driving. Chen *et al*. CVPR 2021. [X] Cut-and-Paste Object Insertion by Enabling Deep Image Prior for Reshading. Bhattad & Forsyth. 3DV 2022. [XI] CADSim: Robust and Scalable in-the-wild 3D Reconstruction for Controllable Sensor Simulation. Wang *et al*. CoRL 2022. Pdf: /pdf/e7be33c6f7c074042a1338af2e3ca97de53415f2.pdf
Dataset source: NeurIPS_2023_submissions_huggingface (conference year: 2023)
Summary: The paper proposes a learning-based method for conditional image inpainting. The method takes an image with a masked region and the conditioned attributes as input, and synthesizes a new image by filling the masked region. Previous works for this task usually fail to generate images with realistic shading effects such as specularity and shadows. The paper tackles this problem by incorporating geometry and lighting cues in the inpainting process. The paper estimates lighting information from the input image and renders shading images of the inpainting content using the estimated lighting, normals, and a set of predefined roughness values. Conditioned on the shading images and the attributes, additional networks are trained to inpaint the foreground and background to generate photorealistic shading effects. The paper shows results on inpainting cars onto outdoor images and demonstrates better results than previous methods in terms of both FID score and user preference. Strengths: 1. By explicitly predicting lighting, normals, and materials, and rendering the shading images, the proposed method can generate inpainting results that have realistic shading effects such as shadows and specularity, and outperforms baseline methods. 2. The paper performs a thorough ablation study to validate the effectiveness of different design choices. Weaknesses: 1. The paper only demonstrates results on a very limited scenario where cars are inpainted onto outdoor images. The designed lighting representation is also tailored for outdoor scenes. It's not clear whether the proposed method is scalable to other cases such as indoor images and more diverse kinds of inpainting content. 2. In terms of shadows, the paper only shows cast shadows on a flat surface. How does it perform when the cast shadows are on other objects such as walls? 
In the meantime, the method can only generate shadows cast by the inpainted content, and cannot produce shadows on the inpainted content cast by the background objects. For example, in Figure 13 Row 7 of the supplemental material, there are no shadows on the inpainted car. 3. In the results shown in the paper, all input masks follow a perfect car silhouette. I am wondering whether this is a requirement of the method. What if a rough mask that may extend to regions outside of the car is given? What would the normal/shading prediction look like? 4. The paper should make it clear that it is following previous works on the lighting representation and add clear references to them. For the illumination image, such a technique has also been used in previous works such as: * Deferred Neural Lighting: Free-viewpoint Relighting from Unstructured Photographs * Total Relighting: Learning to Relight Portraits for Background Replacement In addition, the paper is also related to inverse rendering tasks in computer graphics, and it would be good to add discussions on them: * Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF from a Single Image 5. What does the normal mean in the text prompt? Overall, I think the proposed method is technically sound and the results are convincingly better than the baseline methods. I refrain from giving a higher rating due to concerns on the scalability of the method considering that it only shows results on car inpainting and requires a lot of manually annotated 3D data which may not be easily available for other subjects such as animals and other general objects. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations look good to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
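The shading-image idea in the summary (rendering shading for the inpainted content from estimated lighting, normals, and a set of predefined roughness values) can be illustrated with a minimal per-pixel sketch. This is a hypothetical stand-in, not the paper's renderer: the `blinn_phong_shading` function and its roughness-to-shininess mapping are one common convention, assumed here purely for illustration.

```python
import math

def blinn_phong_shading(normal, light_dir, view_dir, roughness):
    """Scalar Blinn-Phong shading for one pixel.

    All directions are unit-length 3-tuples; returns (diffuse, specular)
    intensities for a single light, with roughness mapped to a Blinn-Phong
    shininess exponent (one common heuristic, not the paper's model).
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    diffuse = max(dot(normal, light_dir), 0.0)
    # Half vector between light and view directions.
    h = [l + v for l, v in zip(light_dir, view_dir)]
    n = math.sqrt(dot(h, h)) or 1.0  # avoid division by zero at grazing setups
    h = [x / n for x in h]
    shininess = max(2.0 / max(roughness, 1e-4) ** 2 - 2.0, 1.0)
    specular = max(dot(normal, h), 0.0) ** shininess
    return diffuse, specular
```

Evaluating this over a normal map once per predefined roughness value would yield a stack of shading images of the kind the paper conditions its inpainting networks on.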
Rebuttal 1: Rebuttal: Thanks for your very detailed review and suggestions; we respond to the mentioned concerns below. We are looking forward to discussing with you during the author-reviewer discussion period. ### Imperfect parsing masks as input > What if a rough mask that may extend to regions outside of the car is given? What would the normal/shading prediction look like? We conducted an experiment using disturbed parsing masks (the silhouette extends outside the car, and the internal structure is coarsened with random noise) as input, with the results shown in **Fig. I** in the attached pdf. In each case, the first row shows results for the accurate parsing mask, while the second row is for the disturbed mask. The other input conditions are kept the same. The results show that the normal and shading predictions, as well as the final repainting results, remain reasonable for disturbed masks, demonstrating the robustness of our proposed method to imperfect parsing masks. Our repainting formulation follows a pixel-wise correspondence between the semantics in the parsing masks and the repainting results. For example, in the right case, the door handles in the repainted image shrink following the disturbed mask. Therefore, better parsing masks generally lead to better repainting results. On the other hand, the repainting results may be negatively affected given severe mask degradation. ### Concerns on shadow generation > The paper only shows cast shadows on a flat surface. As stated in **Fig. 2** in the main paper and **Sec. 8.1** in the supplementary material of the original submission, the cars are inserted as foreground objects onto a flat “insertable” ground (*e.g.*, roads, grass, and dirt). The shadows of the inserted cars are rendered using a flat shadow catcher fitting the ground. 
More realistic shadows can be rendered by using more sophisticated shadow catchers that fit all the geometry in the background images, which is infeasible for a large amount of highly diverse data. Therefore, our shadow net (NetS) only learns to cast shadows on a flat surface with imperfectly rendered shadow labels. Nevertheless, a flat shadow should be reasonably good for open outdoor scenes. > The method can only generate shadows cast by the inpainted content, and cannot produce shadows on the inpainted content cast by the background objects. Unlike the cast shadow on the ground, the shadow on the object cast by another object is a highly challenging non-local secondary lighting effect and is very difficult for networks to directly predict by purely operating in the image space. For general outdoor scenes without additional priors, a full 3D reconstruction of the whole scene is needed to recover this type of non-local shadow [V], which usually requires multi-view inputs. We will clearly mention such limitations about shadows in the final version. [V] Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes. Wang *et al*. CVPR 2023. ### Scalability of the proposed method Please refer to **Sec. II** of the global response. ### More citations needed > The paper should make it clear that it is following previous works on the lighting representation and add clear references to them. As correctly pointed out, our idea of illumination candidate images is indeed similar to what has been proposed in the two mentioned papers. Thanks for suggesting these relevant works, and we will add clear references to those papers as suggested. > The paper is also related to inverse rendering tasks in computer graphics, and it would be good to add discussions on them. We will add more citations and discussions of inverse rendering works as suggested. 
As for the specific paper mentioned, Li *et al*.'s paper proposes a single-view method of lighting and BRDF estimation utilizing learning-based priors on indoor scenes (which might capture scene properties such as Manhattan rooms, local smoothness and global sparsity of the materials, and spatial consistency of local lighting) from a synthetic rendered dataset (which also requires lots of computation, especially for the densely-labeled indoor spatially-varying lighting annotations). They use 12 SG lobes to represent indoor local lighting near the object surface, which generally works fine for indoor scenes. However, for outdoor scenes with extreme sun intensity, optimizing the ground truth SG parameters may be numerically unstable. Nevertheless, finding a proper lighting representation that works well in both indoor and outdoor scenes and is efficient to use is still an open problem. The commonly used framework of “decomposition-->rendering-->reconstruction loss back-propagation” is more challenging for general outdoor scenes in a single-view setting, as the possible priors mentioned may no longer hold, and monocular depth and normal estimation becomes less reliable, which prohibits precise reconstruction of the target image via rendering. In our image repainting setting, we have no accurate “target image” for minimizing reconstruction loss. Therefore, their method cannot be directly applied to our task. In our proposed method, we only conduct a forward rendering process where the normal is estimated from the semantics of object categories (cars in our work specifically), and the need for depth is circumvented by the shadow network (NetS). Due to the incomplete 3D observation of the scene, Li *et al*.'s paper does not handle secondary lighting effects in forward rendering, which is also the limitation of our proposed method. ### Clarifications > What does the normal mean in the text prompt? Sorry for the ambiguity in wording. 
The “normal” in the text prompt is not related to the concept of “surface normal.” It is closer to the concept of “default values” (for instance, the middle darkness of the glass between light and dark).
Diffused Task-Agnostic Milestone Planner
Accept (poster)
Summary: This paper proposes a novel method to solve long-horizon, sparse-reward tasks and multi-task problems, which outperforms offline RL methods on many benchmarks. Strengths: - Provides an elegant and general idea for solving the sparse-reward problem -- which is hard for RL-based methods. - The paper is well-written and easy to understand. - Both theoretical proofs and empirical details are provided, which make the claims in the paper persuasive. - Good results. Weaknesses: - How much do the sampling methods matter? In the paper, the number of sampled points is fixed. If the authors can provide more ablation results with a varying number of sampling points or use other sampling methods, the results will be more instructive. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Mentioned in weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: No obvious societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments, including the positive comment on the soundness of our paper and the suggestion of an ablation study to make our manuscript more instructive. We present the responses to the reviewer's concerns and questions below. **Q. If the authors can provide more ablation results with varying number of sampling points or use other sampling methods, the results will be more instructive.** Thank you for your suggestion of an ablation study on the number of sampled milestones to make the results more instructive. To reflect the reviewer's comments, we conducted additional experiments to determine how changes in the number of milestones affect performance. The results of the additional experiments are presented in the attached PDF file. The results show that using a small number of milestones leads to marginal degradation of performance. This is because the temporal distance between milestones increases as the number of milestones decreases, which increases the burden on the goal-conditioned actor to reach distant subgoals. On the other hand, if the number of milestones exceeds a certain threshold (30 milestones for the antmaze-medium task), the change in the number does not affect the performance of DTAMP. **Q. No obvious societal impact.** A. The proposed method can be applied to a wide range of robotic tasks that require long-term planning in our society, e.g., autonomous driving and indoor navigation. In addition, the advantages of our method of performing multiple tasks with a single agent can lead to cost savings in industrial robot development. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed explanation! --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our contribution of proposing a method to solve long-horizon, sparse-reward problems. Your discussion was also helpful for conducting additional ablation studies, which make our paper more instructive.
Summary: This paper extends diffusion-based latent milestone planners to long-term planning, vision control, and multi-task settings. Specifically, an encoder is trained to project observations into the latent space. The authors employ goal-conditioned imitation learning to train both the encoder and the prior goal-conditioned policy. After that, a diffusion probabilistic model is trained to model the milestone trajectories. In order to generate the shortest path leading to the goal state more efficiently, the authors take into account the temporal distance between consecutive milestones in addition to the initial state and goal state. At the low level, the prior goal-conditioned policy is used to sample atomic actions, guiding the agent toward the given milestone. Experiments conducted on selected tasks from the D4RL and CALVIN benchmarks demonstrate superior performance over the baseline models. Strengths: This paper is well-organized. The proposed shortest-path guidance method is effective. Weaknesses: - I think the current experiments are not enough. Could the authors also provide results on the D4RL MuJoCo locomotion tasks? It would strengthen the paper if better performance could also be achieved there. - I'm wondering why the PointMaze tasks were not selected for comparison with the results from the Diffuser[1]. - Can the authors ablate the choice of image encoder in the image-based planning to verify that jointly training a goal-conditioned actor and critic is necessary? [1]: Michael Janner, Yilun Du, Joshua Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. In Proceedings of the International Conference on Machine Learning, Baltimore, US, Jul 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Why are there only three different goals in the multi-goal setting? Isn't the initial state and goal state assigned manually and the goal-conditioned plans are sampled with inpainting? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments, including acknowledging our contribution of proposing an effective method to generate the shortest path, and the suggestions to strengthen our paper. We present the responses to the reviewer's concerns and questions below. **Q. Could the authors also provide results on the D4RL MuJoCo locomotion tasks?** A. Thank you for your suggestion to improve the impact of our work. To reflect the reviewer's comment, we conducted further experiments on the mentioned tasks by modifying the diffusion guidance method to maximize the sum of rewards rather than to minimize the temporal distance between milestones. The attached PDF file presents the performance of DTAMP evaluated on the D4RL locomotion tasks, which achieves a marginally higher average score compared to the baselines. We would like to note that the main purpose of our approach is to address long-horizon, sparse-reward problems and image-based tasks which the existing diffusion-based sequence modeling methods (Diffuser and Decision Diffuser) cannot handle. However, the D4RL locomotion tasks provide dense rewards and can be performed by planning relatively short trajectories compared to antmaze tasks (predicting more than 100 timesteps forward does not much affect performance on the locomotion tasks). This explains why DTAMP does not show a more significant performance improvement over Decision Diffuser in these tasks. Meanwhile, we would like to emphasize that the greater contribution of our paper is to broaden the field where the generative flexibility of diffusion models can be exploited, rather than performing better in the problems already covered by the existing diffusion-based models. **Q. I'm wondering why the PointMaze tasks were not selected for comparison with the results from the Diffuser.** A. We have conducted an experiment on a Pointmaze environment (U-maze env.) 
and presented the result in our supplementary material (see Section H of the supplementary material). In this experiment, we added three different levels of stochasticity to the system dynamics to demonstrate DTAMP's robustness against environment stochasticity. As a result, DTAMP achieved a score close to the maximum possible value at all three levels of stochasticity. The reason why we did not further evaluate the proposed method on the other variations of the Pointmaze environment (medium and large maze) is that we believe the experimental results on Antmaze tasks (in Table 2 and Table 3 of our paper) are sufficient to show that DTAMP can plan trajectories for various goal positions in mazes of various sizes. This is because the Pointmaze tasks can be solved much more easily than the Antmaze tasks, as the Pointmaze tasks take fewer timesteps to reach their goals and have a smaller action space, making point robots easier to control than ant robots. We would like to further mention that the papers of Contrastive RL [1], Decision Diffuser [2], and Trajectory Transformer [3] also do not provide the performance of their algorithms on the mentioned environments. This fact makes it challenging to fairly compare DTAMP with baselines on Pointmaze environments, as environment-specific settings of hyper-parameters might be needed to accurately evaluate the baseline methods. [1] Eysenbach et. al., "Contrastive Learning as a Goal-Conditioned Reinforcement Learning", NeurIPS 2022 [2] Ajay et. al., "Is Conditional Generative Modeling All You Need for Decision-Making?", ICLR 2023 [3] Janner et. al., "Offline Reinforcement Learning as One Big Sequence Modeling Problem", NeurIPS 2021 **Q. Can the authors ablate on the choice of image encoder in the image-based planning?** A. Thank you for your suggestion to clarify our contribution of proposing a method to learn latent milestone representations. 
To reflect the reviewer's comment, we conducted an additional experiment of DTAMP using a variational autoencoder (VAE) as an image encoder instead of training the encoder jointly with the actor and critic. The result of the ablation study is presented in the attached PDF file. Our empirical analysis shows that the representations learned by the VAE cannot capture dynamical distances (distance in terms of how far apart states are in timesteps), which results in the prediction of infeasible trajectories by DTAMP. **Q. Why are there only three different goals in the multi-goal setting?** A. We found that increasing the number of different goals increases the variance of success rates and requires more rollouts for accurate evaluation. In particular, the computational burden of Trajectory Transformer makes it difficult to simulate a sufficient number of rollouts (about an hour per episode). Therefore, we selected three goals that can include as diverse paths as possible on the maze. --- Rebuttal Comment 1.1: Title: Response Comment: Thank the authors for the additional experiments! I'm satisfied with the results, and I'm happy to increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging that the proposed method can also handle general reinforcement learning problems. We also thank you for the valuable comments which were helpful for conducting additional ablation studies. --- Rebuttal 2: Title: Additional Experiments Comment: Dear Reviewer, Please look at the authors' rebuttal, especially with regard to additional experiments, and see whether that would change your rating. Thanks, Your AC
Summary: This paper introduces a hierarchical architecture named DTAMP to solve sequential decision-making problems. Specifically, the high-level part of the architecture is realized by a diffusion model to decompose the long-term goal into several short-term milestones. Then, the low-level part makes basic decisions at fixed intervals until the milestone is reached, and the policy is trained by goal-conditioned imitation learning. The results show that the proposed approach can achieve state-of-the-art performance on D4RL and CALVIN benchmarks. Strengths: 1. The method combines the diffusion model with hierarchical reinforcement learning, which can achieve state-of-the-art performance on multiple benchmarks. 2. The intermediate latent goals, i.e., milestones obtained by the proposed approach, can be represented not only by traditional discrete labels but also by images, as shown in Figure 4. Weaknesses: 1. The biggest concern would be that the training process is not clearly explained. The authors do not state whether they trained the diffusion model and imitation learning separately or synchronously. If they're trained separately, how to obtain the labeled data of milestones? The authors should supplement enough information. 2. In Table I, the target interval value denotes the target temporal distance between milestones, which is set to be relatively smaller than the maximum value. Then, there is no restriction on the target interval value when the ratio is 1.0, which should be the same as in the unconditioned experiment. The authors should make a further clarification on the increased timesteps when the ratio is 1.0. 3. The structural relationship of Figure 1. is quite confusing. According to my understanding, the diffuser (as shown in the right subfigure) should be performed first, and then the actor and critic (as shown in the left subfigure) should be performed to obtain the trajectory. 
It is recommended to arrange the subfigures from left to right and establish the relationship between the two sub-figures. 4. There are some minor mistakes in this paper, such as typos in Line 122, 176, and 'inference time' in Figure 3. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The authors state that the trained actor is limited in inferring a suitable action for a distant goal if the offline data does not contain a trajectory connecting the current state and the goal state at the end of Section 3.1. Then, when the amount of data decreases, will the performance of the proposed approach decrease significantly? 2. In this paper, the number of planned intervals K is a fixed value. Will different values of K have a significant impact on performance, and is there a situation where the milestone cannot be reached after K intervals? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments, including the summary of our contributions and pointing out some of our explanations that were not clear enough. We present the responses to the reviewer's concerns and questions below. **Q. The biggest concern would be that the training process is not clearly explained.** A. We trained the diffusion model and imitation learning synchronously by adding up the losses as $J_\text{unified}=J_\text{actor} + J_\text{critic} + \alpha J_\text{diffusion}$, where the coefficient $\alpha$ is fixed to 0.001 for all the experiments. We would like to mention that this is presented in Section E (Training details) of our supplementary material. **Q. The authors should make a further clarification on the increased timesteps when the ratio is 1.0.** A. We would like to note that setting a ratio of 1.0 leads to planning the longest path by making the temporal distance between milestones the maximum value $\triangle_\text{max}$. On the other hand, in the case of planning without the condition of target temporal distance, the generated path can have various lengths from the shortest to the longest. As a result, the trajectories planned under the condition of $\triangle_\text{target}/\triangle_\text{max}=1$ show a longer average path length than the trajectories planned unconditionally. We will clarify how the setting of target temporal distance affects the length of the generated trajectory in the revised version of our manuscript, to avoid possible confusion. **Q. The structural relationship of Figure 1. is quite confusing.** A. We wanted to illustrate the training process of DTAMP in Figure 1. We thought that we should explain how to learn milestone representations first, and then explain how to train the diffusion model to plan the learned milestones. 
However, we understand the confusion and we will rearrange and revise the figure for the final version of our paper to more clearly describe how the proposed method works. **Q. There are some minor mistakes in this paper, such as typos in Line 122, 176, and 'inference time' in Figure 3.** A. We appreciate your pointing out the typos we missed. It will help a lot in revising our manuscript. **Q. When the amount of data decreases, will the performance of the proposed approach decrease significantly?** A. No, the statement at the end of Section 3.1 does not mean that the performance of DTAMP will decrease significantly when the amount of data decreases. It means that even if the trained actor may struggle to predict a suitable action for a distant goal, the proposed diffusion-based milestone planner provides milestones that can guide the actor, thus enabling the actor to eventually reach the distant goal by following the relatively close milestones. We understand the confusion and will clarify how the proposed milestone planner enhances sample efficiency for long-horizon, sparse-reward tasks in the revised version of our manuscript. **Q. Will different values of K have a significant impact on performance, and is there a situation where the milestone cannot be reached after K intervals?** A. We agree that there is a lack of explanation of how changes in the number of milestones affect performance. To answer the reviewer's question, we conducted additional experiments to determine how changes in the number of milestones affect performance. The results of the additional experiments are presented in the attached PDF file. The results show that using a small number of milestones leads to marginal degradation of performance. This is because the temporal distance between milestones increases as the number of milestones decreases, which increases the burden on the goal-conditioned actor to reach distant subgoals. 
On the other hand, if the number of milestones exceeds a certain threshold (30 milestones for the antmaze-medium task), the change in the number does not affect the performance of DTAMP. From this empirical analysis, we can conclude that the issue mentioned by the reviewer can be resolved by setting a sufficient number of milestones to cover the time horizon of the targeted environment.
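The behavior discussed in this answer, where a goal-conditioned actor pursues each planned milestone in turn under a bounded step budget, can be sketched as a toy feedback-control loop. All names here (`follow_milestones`, `actor`, `step`, `reached`, `k_max`) are hypothetical stand-ins for illustration, not the authors' actual API.

```python
def follow_milestones(state, milestones, actor, step, reached, k_max):
    """Drive an agent through each milestone in order.

    actor(state, goal) -> action; step(state, action) -> next state;
    reached(state, goal) -> bool; k_max is the low-level step budget
    per milestone, after which the agent moves on to the next one.
    """
    trajectory = [state]
    for goal in milestones:
        for _ in range(k_max):
            if reached(state, goal):
                break  # milestone hit; advance to the next milestone
            state = step(state, actor(state, goal))
            trajectory.append(state)
    return trajectory
```

For example, with a 1-D toy where the "actor" moves one unit toward the goal each step, `follow_milestones(0, [3, 5, 2], ...)` visits 3, then 5, then ends at 2. This mirrors why a too-small number of milestones hurts: larger gaps between consecutive milestones put more burden on the low-level actor within each budget.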
Summary: This paper uses a sequence modeling method for applications like long-term planning, vision-based control, and multi-task decision-making. The authors formulate a novel method which uses a diffusion-based generative sequence model to plan a series of milestones in a latent space and has an agent follow the milestones to complete the task. Their method can learn control-relevant, low-dimensional latent representations of milestones, which makes it possible to efficiently perform long-term planning and vision-based control. They train an encoder to extract milestone representations from observations, and an actor to predict actions to follow the milestones, in a goal-conditioned imitation learning fashion. Their method uses a denoising diffusion model to generate an array of milestones conditioned on a target state. The minimum temporal distance diffusion guidance method is used in this paper to make the proposed method plan the shortest path to reach the target state. Strengths: 1) The encoder trained using the proposed method encodes control-relevant features into unique latent representations, which enables efficient planning even if the observations are high-dimensional. 2) The diffusion-based planner enables flexible planning for a variety of tasks and is advantageous for multi-task decision-making problems. 3) Using the trained actor as a feedback controller makes an agent robust against environment stochasticity, and makes it possible to plan milestones only once at the beginning of an episode, which largely reduces inference time compared to the existing sequence modeling methods. DTAMP has lower inference time compared to other baselines. 4) DTAMP outperforms the baseline methods on every environment. Weaknesses: 1) The language of the paper could be improved. Some of the words are repetitive and there are quite a few typos, e.g., "letting the the diffusion " and others elsewhere. 
2) DTAMP: an ablation study on the design of the diffusion model is missing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Did the authors re-implement all the baselines mentioned in Table2 for the comparison? 2) Why and how did the authors make the design decisions they did for the diffusion model? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1) The proposed method might not work on tasks that lack a goal-reaching objective, and the authors have not applied it to such tasks. 2) Their method cannot solve a new task not included in the dataset. The authors can also elaborate on the societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments, including the summary of our contributions and pointing out typos we missed. We present the responses to the reviewer's concerns and questions below. **Q. Language of the paper could be improved.** A. Thank you for your valuable comment for improving the quality of our paper, and we will refer to it for revision. **Q. Did the authors re-implement all the baselines mentioned in Table2 for the comparison?** A. We did not re-implement the baseline algorithms. The performances of baselines are mostly taken from their original papers, except that we evaluated Diffuser, Decision Diffuser, and Trajectory Transformer on Antmaze environments using the code provided by the authors of those papers. For a more detailed description of the source of baseline performance, please refer to Section C (Source of baseline performance) of the supplementary material. **Q. Why and how did authors take the decisions to built the diffusion model how they did?** A. We used the same diffusion model network architecture as Diffuser and Decision Diffuser for fair comparison. We made this decision to verify that our approach can broaden the area of application of diffusion-based sequence modeling methods without changing the architecture of the diffusion model. For a more detailed description of implementation details, please refer to Section D (Implementation details) of the supplementary material. **Q. The proposed method might not work on the task not having goal reaching objective.** A. We conducted further experiments on the D4RL locomotion tasks (Halfcheetah, Hopper, and Walker2d), which provide dense rewards and should be handled by maximizing a sum-of-trajectory-rewards objective. The additional experiments were done by modifying the diffusion guidance method to maximize the sum of rewards rather than to minimize the temporal distance between milestones. 
The attached PDF file presents the performance of DTAMP evaluated on the D4RL locomotion tasks, where it achieves a marginally higher average score than the baselines. This result indicates that DTAMP can also handle general reinforcement learning problems with a small modification. **Q. The authors could also elaborate on the societal impact of their work.** A. The proposed method can be applied to a wide range of robotic tasks in our society that require long-term planning, e.g., autonomous driving and indoor navigation. In addition, the ability of our method to perform multiple tasks with a single agent can lead to cost savings in industrial robot development. --- Rebuttal Comment 1.1: Comment: Thank you for the answers. I am happy to keep my current rating. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our contribution of developing diffusion-based planning that enables flexible planning and offers advantages in multi-task decision-making problems. Your comments were also very helpful for revising our paper.
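The guidance modification described in this rebuttal (maximizing the sum of rewards instead of minimizing the temporal distance between milestones) follows the usual classifier-guidance pattern for diffusion samplers. The sketch below is a minimal, hedged illustration of that pattern, not the authors' implementation: `denoise_step`, `reward_grad`, and `scale` are hypothetical stand-ins for the learned denoiser, the gradient of a learned return estimate, and the guidance strength.

```python
import numpy as np

def guided_denoising(x, n_steps, denoise_step, reward_grad, scale=0.1):
    """Reverse diffusion with reward guidance: each denoising step is
    followed by a nudge in the direction of higher predicted return,
    replacing guidance that minimizes distance between milestones."""
    for t in reversed(range(n_steps)):
        x = denoise_step(x, t)           # stand-in for the learned denoiser
        x = x + scale * reward_grad(x)   # gradient of a return estimate
    return x

# toy check: with reward r(x) = -||x||^2, guidance pulls the plan toward 0
plan = guided_denoising(
    np.ones(4), 20,
    denoise_step=lambda x, t: 0.95 * x,
    reward_grad=lambda x: -2.0 * x,
)
```

In DTAMP the variable being denoised would be the whole latent milestone sequence rather than a single vector; the toy denoiser and reward above only serve to show the control flow.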
Rebuttal 1: Rebuttal: Dear reviewers, we thank you for acknowledging the contributions of our work and for the constructive comments to improve the submitted manuscript. We are pleased that the reviewers identified the major strengths of the proposed method, which can be summarized as follows: * Planning trajectories in the learned latent space using a diffusion model is an interesting and promising way to solve partially observable problems. * The proposed diffusion guidance method to predict the shortest path to a given goal is effective and neat. * The proposed framework of using a goal-conditioned actor makes the agent robust against environment stochasticity and largely reduces inference time. * The proposed method shows state-of-the-art performance on long-horizon, sparse-reward tasks and on an image-based, multi-task benchmark. * The paper is well-written and easy to understand. We also find that the majority of the reviewers' comments can be summarized into two points: * Additional experiments would make the paper more instructive (demonstration on the locomotion tasks, ablation studies on how the number of milestones affects performance, etc.). - We conducted several additional experiments to reflect the reviewers' suggestions and present the results in the attached PDF file. We hope that the extended empirical analysis will enrich our paper and resolve the reviewers' concerns. * There are a few unclear explanations and typos that need to be revised. - We sincerely appreciate the reviewers' pointing out the shortcomings of our presentation. We will actively incorporate this advice when revising the manuscript. Finally, we thank the reviewers again for their valuable reviews and hope that our responses sufficiently answer their concerns and questions. Pdf: /pdf/02959bbb1512ad84f53732c548298aa4f8ce1975.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper studies the problem of latent-space planning for sequential decision-making tasks using generative diffusion models. Similarly to works such as Diffuser, they train an endpoint-conditioned generative model to generate sub-goals along a path starting at a given state and reaching a certain goal. Unlike Diffuser, they perform the generation in a latent space pre-trained with imitation learning. The method achieves good results on D4RL environments as well as on a pixel-based planning task. Strengths: * The paper is well-written and, for someone who is not an expert in planning or diffusion models, very easy to understand. * The proposed approach makes sense in the current context, i.e. planning to visit subgoals in long-horizon tasks. I appreciate the approach and specifically the contribution related to this line of work, as it reduces advances in the field of planning and/or control to advances in the field of generative models (i.e. better diffusion models -> better goal trajectories -> better performance, to some extent). * Experimental results show that the proposed method has competitive performance with baselines and the latest SOTA, both on state-based and pixel-based domains. Weaknesses: * I am not sure how exactly the approach differs from Diffuser (Janner et al, 2022), at least in the second phase when generating subgoals? From what I understand, the difference is in the first phase, where a latent space with desirable features for control is learned. * Based on Figure 4 of the Decision Diffuser paper, the method achieves high returns on the D4RL locomotion task, while results from Table 2 of your paper seem to indicate otherwise. Can you explain this discrepancy in results? * I am curious how the method performs against recent generative-model-based model-free algorithms such as the contrastive value estimation line of work (Contrastive Learning As a Reinforcement Learning Algorithm by Eysenbach et al. 
2023 and Contrastive Value Learning: Implicit Models for Simple Offline RL by Mazoure et al. 2023)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Can you elaborate on the distinction between your DTAMP method and the Diffuser algorithm? * If possible, can you show a comparison against the method from “Contrastive Learning As a Reinforcement Learning Algorithm”? * Can you elaborate on the discrepancy in results between the Decision Diffuser paper and your Table 2? * How was the noise schedule chosen for the various experiments? If you perform state-based denoising, even in a latent space, I would imagine its structure to be somewhat different from the latent space of pixel-based tasks. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The main limitation is that the empirical contribution and evaluation are focused on goal-conditioned tasks, which reduce to maximizing a 0-1 sparse reward function, as opposed to the general “sum-of-trajectory-rewards” objective. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable review, including the positive comment on legibility. We present our responses to the reviewer's concerns and questions below. **Q. Can you elaborate on the distinction between your DTAMP method and the Diffuser algorithm?** A. The main distinctive features of DTAMP compared with Diffuser are learning a latent representation of subgoals (which we call milestones) and generating temporally sparse milestones instead of predicting every state and action connecting the current state and the goal. It is an intuitive idea that is easy to build on top of an implementation of Diffuser, while providing large benefits in terms of reducing computational cost and making it possible to perform long-horizon and partially observable tasks that Diffuser cannot handle. Furthermore, proposing a diffusion guidance method that ensures the planned trajectory is the shortest path to the goal is another contribution of our work; Diffuser does not guarantee that the planned trajectory is the shortest. **Q. Can you show a comparison against the method from “Contrastive Learning As a Reinforcement Learning Algorithm”?** A. We already compared DTAMP against the method from the mentioned paper [1]. The algorithm is indicated by "ContRL" in Table 2 and Table 3 of our manuscript. While ContRL also utilizes a bootstrapping-free method to train a critic function, it shows limited performance on long-horizon tasks such as the Antmaze environments. In contrast, our approach of generating a series of milestones divides a long-term problem into short-term problems and lets an agent solve them more easily without using bootstrapping. [1] Eysenbach et al., "Contrastive Learning as a Goal-Conditioned Reinforcement Learning", NeurIPS 2022 **Q. Can you elaborate on the discrepancy in results between the Decision Diffuser paper and your Table 2?** A. 
The performance of Decision Diffuser shown in Figure 4 of its paper [2] was evaluated on the D4RL locomotion tasks (halfcheetah, hopper, and walker2d), while the results in Table 2 of our paper were evaluated on the D4RL antmaze tasks. The D4RL locomotion tasks provide dense rewards (depending on the robot's velocity and posture at each time step), while the antmaze tasks provide a reward of +1 only when the robot reaches the goal. When dense rewards are provided, an agent can achieve high performance by planning relatively short trajectories (planning trajectories of about 100 timesteps is sufficient to accomplish the locomotion tasks), whereas the antmaze tasks require an agent to plan trajectories of longer than 500 timesteps. As a result, Decision Diffuser exhibits poor performance on the antmaze tasks due to its limited ability to predict long-term trajectories. [2] Ajay et al., "Is Conditional Generative Modeling All You Need for Decision-Making?", ICLR 2023 **Q. How was the noise schedule chosen for the various experiments?** A. We utilized the same noise schedule (the cosine schedule) used by Diffuser and Decision Diffuser for a fair comparison, and the same scheduling method was used when demonstrating DTAMP on image-based environments. It is worth mentioning that the output of the encoder is normalized in our algorithm, so the signal-to-noise ratio is consistent across domains, which reduces the burden of choosing a different scheduling method for each environment. We agree that there was a lack of explanation about how we designed the diffusion model, and we will include it in the final version of our paper. **Q. The main limitation is that the empirical contribution and evaluation are focused on goal-conditioned tasks.** A. We conducted further experiments on the D4RL locomotion tasks (Halfcheetah, Hopper, and Walker2d), which provide dense rewards and must be handled by maximizing a "sum-of-trajectory-rewards" objective. 
The additional experiments were done by modifying the diffusion guidance method to maximize the sum of rewards rather than minimize the temporal distance between milestones. The attached PDF file presents the performance of DTAMP evaluated on the D4RL locomotion tasks, where it achieves a marginally higher average score than the baselines. This result indicates that DTAMP can also handle general reinforcement learning problems with a small modification.
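For reference, the cosine noise schedule mentioned in this rebuttal is the standard formulation of Nichol & Dhariwal (2021); the sketch below is that standard schedule, not code from the paper. Because the authors normalize the encoder output, the signal-to-noise ratio implied by this schedule is comparable across state-based and image-based latents.

```python
import numpy as np

def cosine_alpha_bar(T, s=0.008):
    """Cumulative signal level alpha_bar(t) of the cosine noise
    schedule (Nichol & Dhariwal, 2021): decays smoothly from 1
    (clean signal) to ~0 (pure noise) over T diffusion steps."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]

abar = cosine_alpha_bar(1000)
snr = abar / (1.0 - abar + 1e-12)  # per-step signal-to-noise ratio
```

The small offset `s` keeps the first steps from being completely noise-free, which is the property that made this schedule popular for both pixel-space and latent-space diffusion.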
Summary: The paper proposes using a diffusion-based generative model to plan a sequence of milestones in a latent space and have the agent follow this latent plan to accomplish a given task. The authors show results on AntMaze and, more importantly, on the CALVIN benchmark to show the effectiveness of their method for image-based planning. Strengths: - The direction of using diffusion-based generative models for planning is definitely quite interesting. I also agree with the authors that generating the plan in a learned latent space is a promising direction. - I also thought that the use of classifier-free diffusion guidance to encourage the trajectory to follow the shortest path was pretty neat. - I appreciate the authors presenting experiments on a partially observed environment like CALVIN! Showing that their method can come up with a reasonable latent plan for a partially observed environment is promising. - I also liked that the authors tried visualising the learned latent space (Fig 2. in Supp). The visualisation shows how the proposed approach can plan successfully in the learned latent space to sequentially accomplish multiple tasks in the environment. Weaknesses: - In L175, the authors mention that their approach can perform the diffusion step only once at the beginning, compared to existing methods which have to predict future trajectories at every step. Can the authors describe how the policy can recover from failure to reach a `milestone' in their case? If the agent's current state and the desired state are never closer than the given threshold, then according to Alg. 1, the agent will never recover from this. - Can the authors comment on how their approach will perform when both the environment and the state space are more complex and partially observed? Something that comes to mind is ImageNav in indoor environments [1] [2]. 
These environments seem more challenging, since trajectories are often longer and predicting intermediate milestones is much harder than in CALVIN-like benchmarks. - For the robustness-against-environment-stochasticity experiments in the supplementary, isn’t the comparison to DD and Diffuser a bit unfair? The experiments compare to DD and Diffuser by only allowing them to predict the whole trajectory at once. But inference-time issues aside, DD and Diffuser models which predict the future trajectory at each timestep should be more robust to stochasticity in the environment than methods which predict the whole trajectory at the beginning. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - For exhaustive one-to-one comparisons with existing methods, it’d be good to show experiments for HalfCheetah, Hopper, and Walker2D as done in both Janner et al. and Ajay et al. Additionally, these aforementioned papers also include block-stacking experiments which aren’t included in this manuscript. - I only partially understood Fig 2 in the Supplementary. How did the authors generate corresponding images for certain milestones (X) in the planned trajectory? Are these images corresponding to the state the agent reached when it was "closest" to the milestone? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments, including acknowledging the contribution of our work and pointing out unclear explanations. We present our responses to the reviewer's concerns and questions below. **Q. Can the authors describe how the policy can recover from failure to reach a `milestone' in their case?** A. To avoid the mentioned issue, we set a time limit and let the agent move on to the next milestone when it fails to reach the current one within the time limit. We found that the performance of DTAMP is not significantly affected by small changes in the time-limit setting, and setting the time limit to about twice the maximum interval $\Delta_{max}$ was sufficient for our experiments. We agree that there was a lack of explanation, and we will include it in the final version of our paper. **Q. Can the authors comment on how their approach will perform when both the environment and the state space are more complex and partially observed (e.g. ImageNav)?** A. Experimental results in the antmaze environments show that DTAMP can handle long-horizon tasks that require an agent to plan paths of up to a thousand timesteps. Therefore, DTAMP should be able to successfully plan trajectories for environments with an even longer time horizon than the CALVIN environment. In our opinion, learning the milestone representation of images obtained from changing camera poses will be a major challenge in applying DTAMP to visual navigation domains such as the ImageNav environment. Meanwhile, there are existing studies proposing promising ways to learn latent representations that capture 3D scenes by encoding radiance fields and to utilize them for indoor navigation [1][2]. Combined with these methods, DTAMP should be able to address challenging visual navigation problems as well. [1] Bautista et al., "GAUDI: A Neural Architect for Immersive 3D Scene Generation", NeurIPS 2022 [2] Kwon et
al., "Renderable Neural Radiance Map for Visual Navigation", CVPR 2023 **Q. For the robustness-against-environment-stochasticity experiments in the supplementary, isn’t the comparison to DD and Diffuser a bit unfair?** A. We would like to note that the experiment in Table 1 of our supplementary material was performed to explain why DTAMP allows an agent to predict the trajectory only once, whereas Diffuser and DD cannot. To that end, we think the experiment sufficiently supports our claim: the learned goal-conditioned actor can adapt to stochastic transitions and allows us to perform the time-consuming denoising process only once at the beginning of an episode (Section 3.4, L174). In addition, the result in Table 2 of our supplementary material demonstrates that allowing DTAMP to replan milestones during an episode makes it more robust to stochasticity and enhances its performance. However, we understand the confusion and will clarify the purpose of the experiment in the revised version of our manuscript. **Q. For exhaustive one-to-one comparisons with existing methods, it’d be good to show experiments for HalfCheetah, Hopper, and Walker2D.** A. Thank you for your suggestion to improve the impact of our work. To reflect the reviewer's comment, we conducted further experiments on the mentioned tasks by modifying the diffusion guidance method to maximize the sum of rewards rather than minimize the temporal distance between milestones. The attached PDF file presents the performance of DTAMP evaluated on the D4RL locomotion tasks, where it achieves a marginally higher average score than the baselines. We would like to note that the main purpose of our approach is to address long-horizon, sparse-reward problems and image-based tasks that the existing diffusion-based sequence modeling methods (Diffuser and Decision Diffuser) cannot handle. 
However, the D4RL locomotion tasks provide dense rewards and can be performed by planning relatively short trajectories compared to the antmaze tasks (predicting more than 100 timesteps ahead does not affect performance much on the locomotion tasks). This explains why DTAMP does not show a more significant performance improvement over Decision Diffuser on these tasks. Meanwhile, we would like to emphasize that the greater contribution of our paper is to broaden the field where the generative flexibility of diffusion models can be exploited, rather than to perform better on the problems already covered by existing diffusion-based models. **Q. How did the authors generate corresponding images for certain milestones (X) in the planned trajectory?** A. To visualize each milestone, we selected the state in the "kitchen-mixed-v0" dataset that is closest to the milestone in the latent space. We thank the reviewer for pointing out the unclear explanation, and we will address it when revising the paper. --- Rebuttal Comment 1.1: Title: Thank you for your responses! Comment: Thank you for responding to my concerns. I am satisfied by the answers and I am happy to keep my rating. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our contribution of proposing a diffusion-based generative model that plans trajectories in a latent space. We also thank you for the valuable discussion, which will help us advance our work in the future.
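The time-limit recovery rule described in this rebuttal (advance to the next milestone when the current one is not reached within roughly twice the maximum interval) could be sketched as the following control loop; `env_step`, `encode`, `actor`, and the threshold are hypothetical stand-ins for the components of the authors' Algorithm 1, not their code.

```python
import numpy as np

def follow_milestones(env_step, encode, actor, state, milestones,
                      threshold=0.1, time_limit=100):
    """Execute a fixed latent plan: switch to the next milestone when
    the encoded state gets close enough to it, or when the per-milestone
    time limit expires, so a missed milestone cannot stall the agent."""
    for m in milestones:
        for _ in range(time_limit):
            if np.linalg.norm(encode(state) - m) < threshold:
                break  # milestone reached
            state = env_step(state, actor(state, m))
        # on time-out, fall through to the next milestone (recovery)
    return state

# toy point-mass demo: the actor steps halfway toward each milestone
final = follow_milestones(
    env_step=lambda s, a: s + 0.5 * a,
    encode=lambda s: s,                 # identity encoder for the toy
    actor=lambda s, m: m - s,           # hypothetical goal-conditioned actor
    state=np.zeros(2),
    milestones=[np.array([1.0, 0.0]), np.array([2.0, 0.0])],
)
```

The inner `break` is the nominal case; the outer `for` advancing regardless of success is what prevents the stall the reviewer asked about.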
SING: A Plug-and-Play DNN Learning Technique
Reject
Summary: The paper proposes SING, a simple gradient preprocessing technique that, combined with any optimizer of choice, argues for improved stability and generalization. The paper further provides a theoretical convergence analysis of the approach. Strengths: 1. The paper is clear. 2. The technique is simple and easy to implement. Weaknesses: Overall the paper proposes a straightforward extension of the gradient centralization method, where the gradients are also normalized in a pointwise fashion as in other adaptive techniques. The main weaknesses of the paper are: 1. Incremental - I do not think the contribution of this paper merits publication due to its incremental nature. Adaptive optimizers already normalize gradients in a similar way, and it is not clear what is added in the proposed method. 2. Non-convincing experiments - Only 1 experiment compares gradient centralization + AdamW (GC + AdamW), which was proposed in [1] and is the closest method to the one proposed in the paper. By that experiment GC + AdamW already achieves approximately the same performance, hence stripping SING from any practical significance. I do not understand why GC + AdamW is not used as a baseline for other experiments as it clearly shows strong performance. As it stands, it is not clear whether the apparent improvement of SING stems from the GC part of SING, or the added normalization which constitutes its novelty. I would encourage the authors to add this ablation study to the paper to make it more convincing. Finally, results in the paper do not include standard deviations, which is a must for an empirical paper. 3. 
The theory can be equally applied to other adaptive optimizers, hence it is not special to SING. [1] Yong et al., "Gradient Centralization: A New Optimization Technique for Deep Neural Networks" Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would appreciate it if the authors could clarify what the motivation is to normalize the gradients element-wise when this is already done in any adaptive technique (in various forms that might not match SING exactly). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First off, we would like to thank the reviewer for their time and feedback. We hope our answers will help clear up some misunderstandings. 1) **About the incremental aspect and the difference with adaptive techniques** The novelty of this paper comes from a novel combination of existing ideas, the theoretical analysis, and the demonstration of the positive impact of the combination on the results. Notably, in the ablation study of Table 2, we show that each individual previous method (GC, normalization, LookAhead, Softplus) is outperformed by a significant margin by the proposed combination. We respectfully disagree with the reviewer about the fact that adaptive optimizers already normalize the gradient in a similar way. In fact, adaptive optimizers such as Adam(W) normalize the gradient element-wise, and the normalization factor is computed via a temporal average. On the other hand, SING normalizes the gradient layer-wise with an instantaneous normalization factor. This is fairly important because, as pointed out in point 3) of the general comment, the temporal mean computed in Adam(W) cannot filter out a sudden increase in the gradient magnitude. This incapability is the cause of the instability observed in Figure 4 of the paper. On the other hand, the normalization operated in SING ensures the gradient magnitude remains constant layer-wise, preventing any such kind of explosion. Furthermore, the normalization of SING is compatible and can work hand-in-hand with adaptive methods such as Adam. Indeed, SING normalizes the gradient layer-wise, and Adam does so element-wise. Therefore, if one weight of a given layer has a small magnitude compared to the others, SING won’t be able to increase its value, but Adam will. This explains the success of the mixture of AdamW + SING, as pointed out in Tables 1, 2, 3, and 4. Reviewer JvsR also raised questions about the interplay between SING and AdamW. 
Please also see our answer (3) to reviewer JvsR, where we proposed to add a discussion about this to the paper. 2) **Non-convincing experiments** We believe that the reviewer might have misunderstood the results presented in Tables 1 and 2 and would like to respectfully correct the following claims made in their review: *"Only 1 experiment compares gradient centralization + AdamW (GC + AdamW)."* \ Two experiments compare AdamW+SING with AdamW+GC: Table 1 (ImageNet) and Table 2 (ablation study on the Rectangle Depth Estimation dataset). *"By that experiment, GC + AdamW already achieves approximately the same performance, hence stripping SING from any practical significance. I do not understand why GC + AdamW is not used as a baseline for other experiments as it clearly shows strong performance."* \ Based on the results of Tables 1 and 2, AdamW+GC has systematically worse performance than AdamW, and even more so when compared against AdamW+SING. The results in the ablation study of Table 2 suggest that normalization is the key factor making it work, not GC. In more detail, simply adding the normalization to AdamW improves the accuracy by +14.87%, whereas adding GC to AdamW reduces it by -0.67%. Of this +14.87% improvement of SING, GC only contributes 2.70% (*i.e.* SING with only normalization and without GC still improves AdamW by 12.17%). Based on our experience, we found that GC is mainly useful to allow for a larger learning rate, therefore escaping more local minima, as per Theorem 3.1. These results lead to the conclusion that SING cannot be interpreted as a *“straightforward extension of the gradient centralization paper”*. 3) **About standard deviations** We agree with the reviewer. We did not compute standard deviations on ImageNet mainly due to our limited access to computational resources. However, please note that we did include standard deviations on CIFAR100 in the supplementary material. 
We also want to stress that the results reported in the paper are not cherry-picked. Except for CIFAR100, once we defined our methodology for hyper-parameter tuning, we ran each training once and reported the results obtained. This reduces the chances of bias in the results. In addition, we added in Table 1 of the PDF accompanying this rebuttal the standard deviations for the first line of Table 2, where it can be seen that the trainings with AdamW always explode whereas SING is always stable, which is the main claim of the paper. We will add these standard deviations (and the ones for the rest of Table 2) in the final version of the paper. 4) **The theory could be applied to other adaptive optimizers** We respectfully disagree with the reviewer on this point as well. We insist that the layer-wise normalization of SING and the element-wise temporal normalization of adaptive optimizers such as Adam are not equivalent. To the best of our understanding, none of the theory developed in the paper could be applied to other adaptive optimizers. Take Theorem 3.1, for example. It relies on the fact that SING takes steps with constant size due to normalization. Adaptive optimizers do not have this property. For adaptive optimizers, the size of the steps can be altered easily, as described in the third part of the global comment. **Q1: I would appreciate it if the authors could clarify what is the motivation to normalize the gradients element-wise when this is already done in any adaptive technique (in various forms that might not match SING exactly)** The normalization applied in SING is not element-wise but layer-wise, and most importantly, it is instantaneous and not a temporal running average. The motivation is to normalize the gradient so as to catch any pathological case (such as exploding and vanishing gradients) so that it does not interfere with the temporal statistics of adaptive methods. 
This normalization also avoids explosions such as the one depicted in Figure 4 of the paper and in point 3) of the general comment. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications, which have alleviated some of my concerns. However, I will keep my score due to incremental novelty and inadequate experimental validation for a practical paper. The latter is especially concerning, given that the authors propose a general-purpose optimizer without much evidence for its performance in medium- to large-scale experiments. My suggestion for the authors is to gather more experimental evidence for their claims (not on CIFAR) and resubmit to another venue. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their reply. The reviewer mentions that our experimental validation is inadequate (suggesting it is limited to CIFAR). We would like to clarify that, in addition to CIFAR100, the paper also features evidence of the better performance of SING on *"medium to large scale datasets"* across several tasks, as it features experiments on - Image classification on ImageNet (Section 5.1, Table 1) - Text translation on IWSLT14 (Section 5.3, Table 4) - Question answering on SQuAD (Section 5.3, Table 4) - Multiple-choice question answering on SWAG (Section 5.3, Table 4) We also use a wide range of architectures, from residual networks to transformers. In addition, we would like to point out that this is a new remark and was not included as a weakness in the initial review, nor by the other reviewers, who on the contrary had positive comments regarding the variety/scale of the numerical experiments [Ahr6, QCf2, JvsR, CMfs], even including it among the paper's strengths [Ahr6, QCf2, JvsR].
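The distinction the rebuttal draws (instantaneous, layer-wise normalization after centralization, versus Adam's element-wise temporal normalization) can be made concrete with a small sketch. The function name and epsilon are our own; for brevity the sketch centers over the whole tensor, whereas the GC paper centers per output channel, so this is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np

def sing_preprocess(grad, eps=1e-8):
    """SING-style preprocessing of one layer's gradient:
    1) gradient centralization: subtract the mean (cf. Yong et al., 2020);
    2) instantaneous layer-wise normalization to unit norm.
    The result is then fed to any optimizer (e.g. AdamW). Because the
    norm is fixed at every step, a sudden gradient spike cannot leak
    into the optimizer's running statistics, unlike Adam's element-wise
    normalization by a temporal average of squared gradients."""
    g = grad - grad.mean()                 # centralization
    return g / (np.linalg.norm(g) + eps)   # layer-wise unit norm

g = np.array([3.0, 5.0, 7.0])
out = sing_preprocess(g)
out_scaled = sing_preprocess(1e6 * g)  # an "exploding" gradient
```

Note that `out_scaled` equals `out` up to numerical precision: the preprocessing is scale-invariant, which is the stability property the authors contrast with Adam's running averages.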
Summary: The paper proposes a simple and hyper-parameter-free way to improve the stabilization and generalization properties of optimizers used in deep-learning scenarios. The authors show that, with the gradient centralization and gradient normalization methods, SING can theoretically escape local minima with large step sizes. They provide several experiments on datasets like ImageNet-1K, RDE, and some NLP tasks to show the superiority of SING together with popular optimizers like AdamW. They also give other theoretical results concerning convergence and invariance properties. Strengths: 1. The experiments on real datasets show that SING+AdamW performs significantly better than other baselines at image classification, depth estimation, and NLP tasks. The method is also simple and does not require additional hyper-parameters. 2. The authors show that SING can escape the basin of attraction of a critical point when the step size is sufficiently large, with a step-size threshold inversely proportional to the network's depth, while GD cannot. The experimental results (Figure 4) show that SING can stabilize the performance of optimizers like AdamW. 3. The paper is well-written and easy to follow. Weaknesses: 1. Although the empirical results are remarkable, the novelty of this paper is limited. As mentioned in the paper, gradient centralization [1] and gradient normalization [2,3,4] are common methods in previous works, and this paper combines these two methods and systematically investigates the properties of SING. 2. The theoretical analysis focuses only on the gradient rather than combining it with momentum and a scheduler; but since the gradient is normalized and the step size is large, the momentum and learning-rate scheduler are critical for global convergence: in Thms 3.3 and 3.4, $\eta$ is small, but $\eta$ needs to be large to escape local minima per Thm 3.1. 3. 
Thms 3.3 and 3.4 require that the mini-batch size B take some concrete value; this is too strict, and it would be better to relax this assumption. [1] Hongwei Yong, Jianqiang Huang, Xiansheng Hua, and Lei Zhang. Gradient centralization: A new optimization technique for deep neural networks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pages 635–652. Springer, 2020. [2] Ashok Cutkosky and Harsh Mehta. Momentum improves normalized sgd. In International conference on machine learning, pages 2260–2268. PMLR, 2020. [3] Ryan Murray, Brian Swenson, and Soummya Kar. Revisiting normalized gradient descent: Fast evasion of saddle points. IEEE Transactions on Automatic Control, 64(11):4818–4824, 2019. [4] Shen-Yi Zhao, Yin-Peng Xie, and Wu-Jun Li. On the convergence and improvement of stochastic normalized gradient descent. Science China Information Sciences, 64:1–13, 2021. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The results in Figure 3 seem not fully converged; can the authors provide results with a larger total number of training epochs? 2. Can the authors relax the assumption on the mini-batch size B in Thms 3.3 and 3.4, as raised in weakness 3? 3. Should the W(x) in Definition 3.1 be A(x), as used later in the paper? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors mention that their method cannot be used together with LayerNorm or LayerScale, and the theoretical results do not incorporate AdamW. I do not find ethical or immediate negative societal consequences in this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for reviewing our submission. 1) **About the limited novelty** The novelty of our work indeed comes from the combination of multiple ideas but also from the theory developed to explain the behavior of such a combination. The relevance of the proposed combination is demonstrated in the ablation study (Table 2 of the paper) since each individual previous method works worse than the proposed method. Furthermore, we utilize the combination to tackle the problem of training stability, which is also novel because none of these existing methods tackle it (*i.e.* all were introduced for other goals). 2) **The theoretical analysis just focuses on the gradient rather than combining it with momentum and scheduler, but since the gradient is normalized and the step size is large, the momentum and learning rate scheduler are critical for the global convergence; the learning rate in Thm 3.3, 3.4 is small, but it needs to be large to escape local minima from Thm 3.1.** Most optimizers used in practice suffer from similar trade-offs, which is why learning rate scheduling is used. SING makes it easier to tune the hyper-parameters, as the normalization gives some independence with respect to the energy landscape (Theorem 3.2). SING is to be used exactly like any other optimizer regarding scheduling: a warmup followed by a high learning rate that is decayed over time. We refer the reviewer to point 4) of the global comment for more details about our theoretical analysis. **3) + Q2: Thm 3.3, 3.4 requires that the mini-batch size B be some concrete value, this is too strict and it's better to relax this assumption.** Unfortunately, a condition on the batch size is necessary with the set of hypotheses we considered. These hypotheses and conditions on the batch size are not unusual (see *e.g.* [1]). 
The intuition is that in the non-convex stochastic case, if one wants to reach a more precise solution, one must reduce the variance of the gradient. To do so, the batch size must be increased, which aligns with the common rule of thumb in the deep learning community that a larger batch size is preferable to achieve better training. The same reasoning works for the learning rate. However, we do not think these hypotheses precisely capture the entire structure of neural networks; see [4] for more details. We refer the reviewer to point 4) of the general comment for a longer discussion on the theoretical results. **Q1: The results in figure 3 seem not fully converged, can the author propose the results with larger total training epochs?** The results of Figure 3 are typical for a cosine decay scheduling. Increasing the number of epochs would widen the plot and would not change its overall shape. Cosine decay is widely used in the literature nowadays [2,3], and we carefully verified it improved the performance for all the assessed optimizers before using it. See point 6) of the answer to reviewer JvsR for more details. **Q3: The W(x) in definition 3.1 should be A(x) in the later paper?** Thank you for pointing it out. As stated in the global comment, we will make sure to fix it. [1] Zhao, S. Y., Xie, Y. P., & Li, W. J. (2021). On the convergence and improvement of stochastic normalized gradient descent. Science China Information Sciences, 64, 1-13. \ [2] Liu, Z., Mao, H., Wu, C. Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A convnet for the 2020s. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11976-11986). \ [3] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., ... & Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 10012-10022). \ [4] Ma, S., Bassily, R., & Belkin, M. (2018, July). 
The power of interpolation: Understanding the effectiveness of SGD in modern over-parametrized learning. In International Conference on Machine Learning (pp. 3325-3334). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for your reply, I think it addresses most of my concerns. I will raise my score and wait for the response from other reviewers.
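As an aside on the variance argument in point 3) above (the $\epsilon^2 = \sigma^2/B$ error floor of Theorems 3.3 and 3.4), the claim that increasing the batch size reduces the variance of the gradient estimate as $\sigma^2/B$ can be checked with a small simulation. The sketch below is our own illustration (not part of the rebuttal): per-sample gradients are modeled as Gaussian noise of standard deviation `sigma` around a true gradient of zero.

```python
import random

random.seed(0)
sigma = 2.0            # per-sample gradient noise level (assumed for the demo)

def minibatch_grad(batch_size):
    """Mean of `batch_size` noisy per-sample gradients around a true value of 0."""
    return sum(random.gauss(0.0, sigma) for _ in range(batch_size)) / batch_size

def empirical_variance(batch_size, trials=20000):
    """Monte-Carlo estimate of the variance of the mini-batch gradient."""
    samples = [minibatch_grad(batch_size) for _ in range(trials)]
    mu = sum(samples) / trials
    return sum((s - mu) ** 2 for s in samples) / trials

for B in (1, 4, 16):
    print(B, empirical_variance(B))   # ≈ sigma**2 / B, i.e. about 4, 1, 0.25
```

The measured variances shrink by the same factor as the batch-size increase, which is exactly why reaching a smaller stationarity error $\epsilon$ in the theorems forces a larger $B$.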
Summary: In this paper, the authors propose a method (called SING) for stabilizing the optimization algorithms used in the training of deep models. The proposed method is based only on a layer-wise standardization of the gradients, without introducing any additional hyper-parameters. In addition, a theoretical analysis of convergence to a stationary point is provided. Extensive empirical simulations show improvement of the training performance when existing optimization algorithms use the proposed approach on various tasks. Strengths: In general, the paper is well-written, and the concepts have been presented in an accessible way. Providing theoretical results, including the convergence and invariance properties, lends more credibility to the proposed method. Furthermore, the experiments on different architectures and on various datasets are another strength of this paper. Weaknesses: I need some clarifications on the following: 1 - As mentioned in Theorems 3.3 and 3.4, convergence is guaranteed only to a stationary point. On the other hand, Theorem 3.1 states that the algorithm can escape from a narrow local minimum. How can SING guarantee that the stationary point is a local minimum (what happens if the algorithm converges to a saddle point or even a local maximum)? 2 - What defines the narrow local minimum and the wide local minimum? There is no curvature information/notion in Theorem 3.1 to distinguish a local flat minimum from a sharp one. 3 - In Theorem 3.3, $\epsilon^2$ is given by $\sigma^2/B$, so to have an arbitrarily small error on the expectation of the gradient at some stationary point, $\sigma$ should scale as $\mathcal{O}(\frac{\sqrt{B}}{D})$. My question is, for a very large model (i.e., $D$ is huge), does assumption (8) hold for every $x\in\mathbb{R}^p$? I am not sure how the assumption holds for a highly non-convex loss in a large deep model. This is a stronger assumption than those made by other approaches. 
Typically, ADAM and other optimization methods in deep learning either assume some level of convexity or use reasonably mild assumptions like small gradients, a bounded sequence of estimates, etc. 4 - Regarding the previous point, it would be a good idea to run an experiment to illustrate the effect of $D$ on the training performance with SING. 5 - Do the results in the experiment section (Tables 1, 2, and 3) show the validation accuracy or the training accuracy? Please clarify this. 6 - The convergence result does not provide any insight into generalization to unseen data. It is purely an optimization perspective. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The largest ball contained within the basin of attraction in Definition 1.1 is denoted by $\mathcal{B}$; however, in other places, the authors use $A()$, am I right? I couldn't find the definition of the $(\epsilon, \phi)$-stationary point used in Theorem 3.4. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see my comments for Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First off, we would like to thank you for your time and comments, and we hope that our answers will help clarify the paper. 1) **Convergence to a stationary point** As pointed out in Definition 3.1, for a saddle point $\mathcal{B}(x^*) = \lbrace x^*\rbrace$ hence $r=0$, and therefore Theorem 3.1 guarantees that the saddle point is escaped in exactly one iteration. Take for instance $f(x) = x^3$; in this case $x^* = 0$, $W(x^*) = \mathbb{R}\_{-}$ and the largest ball centered around $x^*$ contained within $W(x^*)$ is $\lbrace x^* \rbrace$. In practice, gradient descent will never converge to that point except if you fall exactly on it, which has a probability $0$ of happening. The same goes for local maxima: in this case $W(x^*) = \lbrace x^* \rbrace$ and $\mathcal{B}(x^*) = \lbrace x^*\rbrace$. For example, take $f(x) = -x^2$; in this case $x^* = 0$, $W(x^*) = \lbrace 0\rbrace$ and $\mathcal{B}(x^*) = \lbrace 0\rbrace$. For more details, we refer the reviewer to [1]. 2) **Narrow vs wide and flat vs sharp** A narrow local minimum corresponds to a point $x^*$ such that $\mathcal{B}(x^*)$ has a small radius. Conversely, if the radius is high, it is considered wide. We agree with the reviewer that this terminology is unclear and will add clarification to the paper. The local sharpness, however, is contained within the gradient norm. As can be seen in Equation (5), the flatter the local minimum (and hence the lower the gradient norm), the harder it is for SGD to escape. As SING is independent of the gradient norm, it is independent of the sharpness and hence escapes any local minimum, provided it is narrow enough. Lastly, sharpness/flatness is an easy-to-understand notion in 2 or 3D but is harder to manipulate and understand in higher dimensions. Indeed, sharpness is local information, and a local minimum could be flat in some directions and sharp in others. That is why we decided not to introduce and manipulate this notion in more detail. 
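The escape argument in point 1) can be illustrated numerically. The sketch below is our own 1-D toy (not from the paper): on $f(x) = x^3$, the gradient $f'(x) = 3x^2$ vanishes quadratically near the saddle at $x^* = 0$, so plain gradient descent stalls there, while normalized gradient descent moves at the fixed speed $\eta$ and passes the saddle.

```python
def f_prime(x):
    return 3 * x * x          # derivative of f(x) = x**3; saddle at x* = 0

def descend(x0, eta, steps, normalized):
    """Run (normalized) gradient descent from x0 for a fixed number of steps."""
    x = x0
    for _ in range(steps):
        g = f_prime(x)
        if g == 0:
            break
        # normalized GD moves at fixed speed eta; plain GD moves at eta * |g|
        x -= eta * (g / abs(g)) if normalized else eta * g
    return x

x_gd = descend(0.013, 0.01, 100, normalized=False)   # barely moves near the saddle
x_ngd = descend(0.013, 0.01, 100, normalized=True)   # escapes well past x* = 0
print(x_gd, x_ngd)
```

After 100 steps, plain GD is still stuck next to the saddle while the normalized iterate is far into the decreasing region, matching the "escaped in exactly one iteration" intuition for a basin of radius zero.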
3) **Assumption (8)** The variable $\sigma$ measures how well the stochastic gradient approximates the real gradient. It is a constant that is more likely to depend on the dataset than on the network’s architecture as long as the gradient is regular enough (Assumption (9)). Assumption (8) is a very classical assumption made in non-convex optimization in the stochastic setting (see *e.g.* [2, 9]). We respectfully disagree with the reviewer that this “assumption is stronger than other approaches” when other approaches consider a deterministic convex setting, and we consider a non-convex stochastic one. If necessary, assumption (8) could be replaced by $\forall t, \mathbb{E}[\|\nabla F(x\_t) - \nabla f(x\_t)\|\_2^2] \leq \sigma^2$ as it is only used in this case. 4) **Effect of D** While we agree it would have been interesting, such an experiment would have been hard to conduct in a principled and rigorous way as typical models only exist in three to five sizes. A (limited) version of such a study can be seen in Table 1, where ResNet18 and ResNet34 are evaluated on ImageNet when ResNet18 has $D=62$ and ResNet34 has $D=110$. 5) **Training vs validation accuracy** In every table of this paper, the validation accuracy is reported. The only time a training metric is reported is in Figure 4 to better highlight the explosion in the training loss. We thank the reviewer for pointing this ambiguity out and will clarify this in the paper. 6) **The convergence result doesn't provide any insight for the generalization to the unseen data. It is a purely an optimization perspective.** The question of generalization is an open problem, and the community does not always agree on a correct definition [8]. It is also largely a statistical problem that is outside the scope of this paper. 
However, given the apparent link between the width of the local minima and generalization [3,4,5,6,7], Theorem 3.1 suggests that by controlling the learning rate, SING is more likely to skip narrow local minima and generalize better. **Q1 - The largest ball contained within the basin of attraction in Definition 1.1 is denoted by B; however, in other places, authors use A, am I right?** Thank you for pointing it out. As pointed out in the global comment, we will make sure to fix it. **Q2 - I couldn't find the definition of $(\epsilon, \phi)$-stationary point used in Theorem 3.4.** It is defined on lines 163-164: it is a point such that in the limit $\epsilon \to 0$, the norm of the gradient converges to a point in $\text{Ker}(\phi)$. [1] Murray, R., Swenson, B., & Kar, S. (2019). Revisiting normalized gradient descent: Fast evasion of saddle points. IEEE Transactions on Automatic Control, 64(11), 4818-4824. \ [2] Zhao, S. Y., Xie, Y. P., & Li, W. J. (2021). On the convergence and improvement of stochastic normalized gradient descent. Science China Information Sciences, 64, 1-13. \ [3] Cooper, Y. (2018). The loss landscape of overparameterized neural networks. arXiv preprint arXiv:1804.10200. \ [4] Goodfellow, I. J., Vinyals, O., & Saxe, A. M. (2014). Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544. \ [5] He, H., Huang, G., & Yuan, Y. (2019). Asymmetric valleys: Beyond sharp and flat local minima. Advances in neural information processing systems, 32. \ [6] Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., & Tang, P. T. P. (2016). On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836. \ [7] Pennington, J., & Bahri, Y. (2017, July). Geometry of neural network loss surfaces via random matrix theory. In International conference on machine learning (pp. 2798-2806). PMLR. \ [8] Zhang, C., Bengio, S., Hardt, M., Recht, B., & Vinyals, O. (2021). 
Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3), 107-115. \ [9] Bottou, L., Curtis, F. E., & Nocedal, J. (2018). Optimization methods for large-scale machine learning. SIAM review, 60(2), 223-311
Summary: The paper presents SING (StabIlized and Normalized Gradient), a new method designed to enhance the stability and generalization capabilities of the Adam(W) optimizer. SING involves a layer-wise standardization of the gradients that are input into Adam(W), and does not require the introduction of additional hyper-parameters. This makes it straightforward to implement and computationally efficient. The authors demonstrate the effectiveness and practicality of SING through improved results across a broad range of architectures and problems, including image classification, depth estimation, and natural language processing. It also works well in combination with other optimizers. In addition to these experimental results, a theoretical analysis of the convergence of the SING method is provided. The authors argue that due to the standardization process, SING has the ability to escape local minima narrower than a certain threshold, which is inversely proportional to the depth of the network. This suggests that SING may offer significant advantages in training deep neural networks. Strengths: 1. As the authors have claimed, the proposed method can be applied in a plug-and-play style, with impressive applicability to a lot of tasks, datasets, and optimizers. Without additional hyperparameters introduced, I think this work has great potential to become a standardized training technique with big impact in the community. The authors also provide elegantly implemented source code in PyTorch, which I think is already close to ready to be included in the standard PyTorch library. 2. Empirical performance is impressive, with huge improvements over baseline optimizers in many settings. 3. Mostly, the paper is well written and easy to follow, with only a few ambiguities, which I will mention in the weaknesses part. Weaknesses: 1. Firstly, I believe there is a misalignment between the theoretical analysis and the practical method. 
To be specific, the analysis in Theorem 3.1 compares the learning rate needed for escaping local minima for SING and SGD. However, as the authors claimed previously, the SING algorithm is proposed to overcome the limitations of Adam(W). Therefore, it would be better to directly analyze SING against Adam(W), which is also the main baseline in the experiment part. 2. Some issues in terms of writing. One is that the authors did not formally formulate the centralize operation in math equations but only in code, which could be confusing for readers who are not familiar with the PyTorch framework. I strongly recommend that the authors provide strict math formulations instead of ambiguous code only. For example, at the least, I am still confused about which dimension the mean operation is executed over and what the meaning of that averaging is. A second issue in writing is that the authors seem to interchangeably use the terms "learning rate" and "step size" in Section 1 but treat them as different things in Section 3. I hope the authors can clarify the differences or use one term consistently. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. According to Figure 4, the spikes occur once during one training process. Do the authors have any comments on what causes the spikes exactly? 2. The second row of Table 2 seems to be wrongly presented. 3. What if momentum also comes into the picture to be combined with SING? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First off, we would like to thank you for your kind comments and your detailed feedback. 1) **Misalignment between theoretical analysis and practical method** We refer the reviewer to point 4) of the global comment for more details about our theoretical analysis. 2) **Confusing definition of the centralization operation** This is a good point. We did not detail it more in the paper as the operation has already been introduced and thoroughly discussed in [1]. We agree that the definition is ambiguous even in the original paper, and we will try to clarify it in the appendix. 3) **“learning rate” and “step size”** We agree that the way these two terms are used is confusing in the paper. We identify as “learning rate” the value $\eta$ defined in Equation (2). We refer to “step size” as being the size of the steps taken by the algorithm, *i.e.* if the updates are $x\_{t+1} = x\_{t} - \eta g\_t$ it is defined as $\eta \|g\_t\|\_2$. We will make sure to correct the paper to make the difference clearer. **Q1: According to Figure 4, the spikes occur once during one training process. Do the authors have any comments on what causes the spikes exactly?** We have identified the spike to come from an unexpected rise in the gradient magnitude from one iteration to the next. A recent preprint reports a similar phenomenon on large language models [8]. As pointed out in point 3) of the general comment, the temporal averaging of Adam(W) fails to filter out the sudden increase and creates the spike. The sudden rise is followed by smaller updates, which prevent Adam from recovering to the original level. However, the cause of the sudden growth is unknown (an explanation is conjectured in [8]). In this case, SING works particularly well because the normalization operation in Equation (2) prevents the gradient magnitude from changing during training. **Q2: The second row of Table 2 seems to be wrongly presented.** Table 2 is an ablation study. 
In more detail, we study what happens if we remove one component from SING and what happens if we add this component to AdamW. In the second row, we see that adding the normalization defined in Equation (2) to AdamW improves its performance. We also see that removing the normalization component from SING prevents it from converging altogether for all the assessed learning rates and weight decays. NC stands for no convergence. The maximum learning rate reported corresponds to the maximum learning rate for SING and hence isn’t reported in this row of Table 2 since it didn’t converge. We will modify the description of Table 2 to include a better explanation. **Q3: What if the momentum also comes into the picture to be combined with SING?** First, momentum is combined with SING, as shown in Algorithm 2. In fact, any momentum strategy could be used with SING. SING modifies (and standardizes) the gradient passed as input to the optimization algorithm (which can contain momentum). Secondly, one could think about ways of incorporating momentum within the computation of the gradient norm (or the gradient mean). We argue it would be counterproductive, as temporal averaging would fail to filter out local and sudden increases in the gradient norm. This is what happens in Adam(W) and what we describe in point 3) of the general comment. Hence, doing so is very likely to remove the stability properties of SING.
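To make the centralize-then-normalize pipeline discussed above concrete, here is a minimal, framework-free sketch of the gradient pre-processing (our own simplification; the paper's actual implementation works on PyTorch tensors and feeds the result to the host optimizer, e.g. AdamW, and the `eps` guard is our addition for numerical safety):

```python
import math

def standardize(layer_grad, eps=1e-8):
    """Centralize (subtract the mean) then L2-normalize one layer's gradient."""
    mu = sum(layer_grad) / len(layer_grad)
    centered = [g - mu for g in layer_grad]
    norm = math.sqrt(sum(c * c for c in centered))
    return [c / (norm + eps) for c in centered]

def sing_preprocess(grads_per_layer):
    """Layer-wise standardization applied before the host optimizer's update."""
    return [standardize(g) for g in grads_per_layer]

grads = [[3.0, 1.0, 2.0], [10.0, -10.0, 0.5, 0.5]]   # toy per-layer gradients
processed = sing_preprocess(grads)
# each processed layer gradient has zero mean and unit L2 norm, so its
# magnitude no longer depends on the raw gradient scale
```

Because the pre-processing is agnostic to the host optimizer, any momentum strategy can then be applied to `processed`, as described in the answer to Q3.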
Rebuttal 1: Rebuttal: First, we thank all the reviewers for their time, consideration, and hard work. 1) **Reparameterization of Gamma [JvsR]** Following the advice of reviewer JvsR, we decided to modify Equation (3) such that $\Gamma(x)\_i = \sqrt{D} \|x\_{I\_k}\|\_2$ for $k \in [\\!|1,D|\\!]$ and $i \in I\_k$. This modification corresponds to rescaling the normalized gradient norm to become independent of the network's depth. With this reparameterization, we get the interesting property that $\left\|\frac{x}{\Gamma(x)}\right\|\_2 = 1$. This slightly changes the conclusion that “SING can escape local minima narrower than a threshold that is inversely proportional to the network’s depth” to “SING can escape local minima narrower than a threshold that is independent of the network’s architecture”, yet the message remains broadly the same. We believe that this property better captures our intuitive understanding of the good performance of SING. This reparameterization of $\Gamma$ simplifies Theorem 3.1 where $\eta\_{\text{SING}} \geq 2r$ (getting rid of the $\sqrt{D}$ term, which was the cause of misunderstandings). Similarly, for Theorems 3.3 and 3.4, the dependencies on $D$ are replaced by dependencies in $\sqrt{D}$, leading to a tighter bound. The paper's conclusions remain unaltered, but their justifications are now more concise and clearer, thus making our presentation more impactful and convincing. Additionally, such a change does not impact Adam(W) in any way since the updates of Adam(W) are invariant to a constant rescaling of the weights. For SGD, the optimal learning rate found must be multiplied by $\sqrt{D}$ to recover the original behavior. 2) **Typo in Section 3.1 [CMfs, Ahr6]** We corrected the typos reported by the referees, notably where we used $A(x^*)$ instead of $\mathcal{B}(x^*)$. We thank reviewers CMfs and Ahr6 for pointing it out. 
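A quick numerical check (ours, not from the paper) of the property claimed in point 1): dividing each layer by $\sqrt{D}$ times its own norm yields $\left\|\frac{x}{\Gamma(x)}\right\|\_2 = 1$, regardless of the number of layers $D$, since each layer contributes exactly $1/D$ to the squared norm.

```python
import math

def gamma_normalize(layers):
    """Reparameterized Gamma: divide layer k by sqrt(D) * ||x_{I_k}||_2."""
    D = len(layers)
    return [[x / (math.sqrt(D) * math.sqrt(sum(v * v for v in layer)))
             for x in layer] for layer in layers]

layers = [[3.0, 4.0], [1.0, 2.0, 2.0], [5.0]]        # D = 3 toy "layers"
g = gamma_normalize(layers)
total_norm = math.sqrt(sum(v * v for layer in g for v in layer))
print(total_norm)   # ≈ 1.0: each layer contributes norm 1/sqrt(D)
```

Changing the number or sizes of the layers leaves `total_norm` at 1, which is the architecture-independence stated above.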
3) **About the cause of the spikes and why Adam(W) is sometimes unstable [JvsR, QCf2, Ahr6, CMfs, hVDH]** Adaptive optimizers such as Adam normalize the gradient element-wise with normalization factors that are computed as temporal averages. This means that if the gradient norm’s magnitude increases by a large factor from one iteration to the other (which happens frequently with transformers, as was reported in https://arxiv.org/abs/2304.09871), the temporal mean won’t be able to filter out the increase right away. For instance, applying Adam to the sequence $[1, \dots, 1, 100, 1, \dots, 1]$ yields Figure 1 of the PDF document attached to the rebuttal. As the figure shows, the update's magnitude increases and then decreases to an abnormal level. This results in the spike visible in Figure 4 of the paper and explains why AdamW fails to recover after the spike. With SING, the normalization is done based on the instantaneous norm of the layer gradient, preventing any explosion of this kind. 4) **Theoretical results limited to SGD [QCf2, JvsR, CMfs]** Reviewers QCf2, JvsR, and CMfs pointed out that the theoretical results were limited to SGD. We agree that the current theory does not cover all the aspects of a practical setting (such as *e.g.* varying learning rates or momentum). A proof for Adam(W) would take considerable effort and time and could be the focus of future works. For example, this 30-page paper [1] illustrates how challenging it is to provide a correct proof of convergence of Adam for a constant learning rate. Adapting it to the varying learning rate setting would be the topic of another publication and is outside of the scope of this very submission. Furthermore, to the best of our knowledge, the theoretical properties of AdamW (such as convergence) have not yet been shown. The provided theory is a first step towards a better understanding of SING’s properties, and we still believe that it provides interesting insights into the method. 
As for Theorems 3.3 and 3.4, please be aware that these theorems do not, in fact, prove the advantage of the proposed normalization. However, they enable us to check that the method is consistent *i.e.* that its convergence is maintained under reasonable conditions. **References** For this rebuttal, we will cite the following papers: \ [1] Défossez, A., Bottou, L., Bach, F., & Usunier, N. (2020). A simple convergence proof of adam and adagrad. arXiv preprint arXiv:2003.02395. \ [2] Liu, Z., Mao, H., Wu, C. Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A convnet for the 2020s. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11976-11986). \ [3] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., ... & Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 10012-10022). \ [4] Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., & Jégou, H. (2021, July). Training data-efficient image transformers & distillation through attention. In International conference on machine learning (pp. 10347-10357). PMLR. \ [5] Wang, W., Dai, J., Chen, Z., Huang, Z., Li, Z., Zhu, X., ... & Qiao, Y. (2023). Internimage: Exploring large-scale vision foundation models with deformable convolutions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14408-14419). \ [6] Xiao, T., Singh, M., Mintun, E., Darrell, T., Dollár, P., & Girshick, R. (2021). Early convolutions help transformers see better. Advances in neural information processing systems, 34, 30392-30400. \ [7] Chen, X., Hsieh, C. J., & Gong, B. (2021). When vision transformers outperform resnets without pre-training or strong data augmentations. arXiv preprint arXiv:2106.01548. \ [8] Molybog, I., Albert, P., Chen, M., DeVito, Z., Esiobu, D., Goyal, N., ... & Zhang, S. (2023). 
A theory on adam instability in large-scale machine learning. arXiv preprint arXiv:2304.09871.
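The mechanism described in point 3) can be reproduced in a few lines. The sketch below is our own illustration (default Adam hyper-parameters, bias correction included, no learning rate): it computes the magnitude of Adam's update for a scalar gradient sequence that is constant except for a single outlier, as in the rebuttal's $[1, \dots, 1, 100, 1, \dots, 1]$ example.

```python
import math

def adam_updates(grads, beta1=0.9, beta2=0.999, eps=1e-8):
    """Magnitude of each (bias-corrected) Adam update for a scalar gradient sequence."""
    m = v = 0.0
    updates = []
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        updates.append(abs(m_hat) / (math.sqrt(v_hat) + eps))
    return updates

# a gradient that is constant except for a single outlier at step 251
grads = [1.0] * 250 + [100.0] + [1.0] * 249
ups = adam_updates(grads)
print(ups[249], max(ups), min(ups[251:]), ups[-1])
```

Before the outlier the update magnitude is steady at 1; the outlier makes it jump (the slow average of `v` cannot absorb the increase instantly), after which the inflated second moment keeps the updates well below the pre-spike level for hundreds of iterations, mirroring the spike and the slow recovery in Figure 4 of the paper. SING's instantaneous layer-wise normalization, by contrast, keeps the input magnitude constant by construction.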
dataset_source: NeurIPS_2023_submissions_huggingface
conference_year: 2023
Summary: The paper proposes SING, a plug-and-play approach to enhance optimizers without introducing additional hyperparameters. The idea consists of standardizing gradients in a layer-wise manner prior to the host optimizer’s execution, and is motivated by factors such as easier escaping of narrow minima and invariance properties. Experiments on image classification, depth estimation, and NLP tasks such as NMT and QA are used to assess the proposed method’s performance and measure improvements over host optimizers. Strengths: The paper is written clearly and well-organized. The proposed method is easy to implement and works in a plug-and-play fashion. The layer-wise gradient standardization can be viewed as a gradient pre-processing step and is hence completely agnostic to the host optimizer, making the approach general and flexible. The authors provide theoretical analyses to motivate and better understand SING. The convergence analysis considers the smooth non-convex bounded-variance setting, which I believe to be a good balance between assumptions and how well it captures network training. Experiments include different tasks from two domains and distinct network architectures, including ResNets and Transformer-based models. Weaknesses: The theoretical analysis is not helpful in motivating or better understanding SING’s benefits. While SING adopts layer-wise normalization, it seems that the analysis holds given any partition of the $p$ many parameters into $D$ many sets – i.e. the fact that each of the $D$ tensors is assumed to correspond to a different layer is neither necessary nor used anywhere in the analysis. 
We can then consider the effect of variable $D$ for a fixed $p$ (grouping the parameters in larger or smaller sets, say filter-wise, kernel-wise, or even parameter-wise), and we recover Normalized Gradient Descent (NGD) with $D=1$ and a form of sign SGD with $D=p$ (this has been studied in previous works to understand how normalizing gradients of coarse/fine-grained parameter sets can affect performance and convergence). This has two concerning implications: - For Theorem 3.1 Since $\left\|\frac{g}{\Gamma(g)}\right\| = \sqrt{D}$, it follows that the post-processed gradients will scale (in norm) as $\sqrt{D}$, and hence $\eta_{SING}$ in Theorem 3.1 is fundamentally ‘undoing’ this scaling, hence claiming that ‘SING can escape local minima narrower than a threshold that is inversely proportional to the network’s depth’ is not very meaningful. A similar argument would be to pre-process the gradient by scaling it up by 100 and claiming that now we can use a 100x smaller learning rate, which, although technically correct, is not useful. While I’m aware that the layer-wise normalization will actually change the update direction and not just scale it, it seems that this change in direction does not play any positive role in the presented analysis. Finally, one could simply set $D=p$ (i.e. artificially view each parameter as an independent layer) and Theorem 3.1 would state that the sufficient learning rate to escape narrow minima would actually decrease way more aggressively (as $1/\sqrt{\#\text{parameters}}$ instead of $1/\sqrt{\#\text{layers}}$) – this is clearly not actually useful since we’re just scaling up the (norm of) post-processed gradients (compared to the layer-wise case) and compensating by scaling down the learning rate. - For Theorems 3.3 and 3.4 To achieve stationarity $\delta$ independent of $D$ (i.e. 
$\epsilon \propto \frac{\delta}{D}$) we would set $\eta = \Theta(\frac{\delta}{D})$, $T = \Theta(\frac{D^2}{\delta^2})$, $B = \Theta(\frac{D^2}{\delta^2})$, where we are ignoring dependencies on $F(x_0)$ and $L$. This means that, in the original case where $D =$ # layers, the guarantee requires both the number of iterations and the batch size to increase quadratically with the depth of the model, which is concerning. Moreover, if we set $D=1$ (i.e. NGD) we actually minimize the required number of iterations and batch size. Therefore, these results do not motivate or support layer-wise normalization, and actually question this design choice by offering significantly better guarantees for NGD. - Other points Although the theoretical analysis considers updates following Eq. 2, the experimental studies heavily focus on AdamW + SING, which is not well-discussed. My main concern in this case is that the normalization from SING affects both $m_t$ and $v_t$ in the numerator and denominator of AdamW, respectively. It is unclear what is really happening in this case. It seems that for long enough training time windows, if the layer-wise gradient norms remain roughly constant then the normalization effect would cancel out, reducing to AdamW’s updates (that is, if the $\epsilon$ term in the denominator of AdamW is negligible). Accounting for $\epsilon$, on the other hand, yields AdamW’s updates except with a different value of $\epsilon$ for each layer, each scaled by the layer’s gradient norm. Although it is unlikely that layer-wise gradient norms remain roughly constant for enough iterations, this hints that SING’s normalization might be affecting the size of $\sqrt{v_t}$ compared to $\epsilon$ differently for each layer. This could result in confounding effects since the value of $\epsilon$ (compared to $\sqrt{v_t}$) can play a major role in the behavior of Adam-like methods, potentially improving the performance in multiple settings. 
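For concreteness, the $\sqrt{D}$ scaling argument above can be checked on a simplified stand-in for SING that divides each layer's gradient tensor by its own norm (the actual method additionally centers and calibrates the statistics; the names below are illustrative, not the paper's code):

```python
import numpy as np

def layerwise_normalize(grads, eps=1e-12):
    """Divide each per-layer gradient tensor by its own L2 norm.

    Simplified stand-in for SING's layer-wise standardization: the real
    method also centers the gradient and calibrates the scale.
    """
    return [g / (np.linalg.norm(g) + eps) for g in grads]

# D = 3 "layers" of arbitrary shapes, p parameters in total.
rng = np.random.default_rng(0)
grads = [rng.normal(size=s) for s in [(4, 4), (8,), (2, 3, 3)]]

normed = layerwise_normalize(grads)
flat = np.concatenate([g.ravel() for g in normed])

# The concatenated post-processed gradient has norm sqrt(D),
# independent of the layer sizes -- the scaling discussed above.
print(np.linalg.norm(flat))  # ~ sqrt(3)
```

Since each of the $D$ normalized tensors has unit norm, the concatenated vector always has norm $\sqrt{D}$, regardless of how the $p$ parameters are partitioned.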
The experimental analysis could also be substantially improved. The hyperparameter tuning strategy (choosing the best learning rate, then fixing it to choose the best weight decay) can easily lead to suboptimal values, especially for SGD and any adaptive method that does not inherently incorporate AdamW’s weight decay decoupling (see Fig. 1 and 2 of Loshchilov & Hutter). This can lead to an unfair advantage for AdamW and AdamW + SING over all other methods. The cosine schedule is also known to improve AdamW’s performance and more often than not harm SGD (compared to step-wise), hence it would be valuable to also collect results with a step-wise schedule for a more comprehensive and clear comparison. There is also some loss in novelty from the fact that the actual method that plays the key role in the experiments also adopts LookAhead and softplus calibration. In particular, centering is not novel, although it has been explored more extensively for the 2nd moment estimate (centered RMSProp, AdaBelief, SDProp, ACProp, etc.), and layer-wise gradient normalization for adaptive methods has also been studied (AdaShift & AvaGrad – neither of the two is cited or discussed). These methods should be included in the comparison to have a clear picture that would allow a proper assessment of SING. Finally, there are also concerns regarding the other vision tasks. It is unclear exactly which ResNet-18 model was used for CIFAR-100: if it is a ~11M param model, then it is a wider version (DeVries ResNet-18) which differs from the one originally proposed by He et al. and achieves over 77% acc. on CIFAR-100 when trained with SGD (see LookAhead’s paper and DeVries & Taylor). This would indicate issues with the CIFAR-100 results in Table 1, since results with SGD would be ~1.4% worse even with additional augmentation and 100 extra training epochs. As for depth estimation, the dataset is synthetic and not well studied, hindering a proper assessment of its results. 
Nonetheless, ViTs are typically well-trainable with SGD if warmup and gradient clipping are employed (which is common practice for these models). The fact that SGD achieved 0.25% accuracy suggests that the experimental setup should be revised – warmup and gradient clipping should be adopted for SGD since they are common practice. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See my points above regarding points for improvement. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: The authors discuss limitations satisfactorily. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank reviewer JvsR for their detailed and insightful review. The reviewer’s feedback allowed us to improve the presentation of the paper. We will make sure to thank them in the final version of the paper. We hope our modifications and answers will help the reviewer reconsider their assessment of the paper. 1) **No theoretical justification for the layer-wise normalization.** We agree with the reviewer. We currently have no theoretical analysis showing the advantage of layer-wise normalization. In the paper, we decided to adopt a general setting that is network architecture agnostic. Therefore, the provided arguments apply to any partition of the parameters. The chosen normalization is motivated by intuition and practicality. Take, for instance, the case of a network with a linear layer. SING for a linear layer has two effects: (i) standardizing the inputs to the layer and (ii) removing the magnitude of the partial derivative of the loss. The layer-wise normalization is adapted to how current optimizers are implemented, which work layer by layer. Computing the norm across different tensors would require making additional passes through all the layers of the network at every optimization step. We ran additional experiments to verify if layer-wise partitioning is a good choice. The results are available in Table 2 of the companion PDF. The experiments suggest that layer-wise partitioning works better than the alternatives. We will add a paragraph discussing this issue in the paper and add these results in the supplementary material. 2) **About Theorems 3.3 and 3.4** Please see point 1) of the general comment. The dependency is now linear instead of quadratic. 3) **About the mix of AdamW and SING** Although the complexity of Adam's formula prevents a detailed theoretical study of the mix, our intuition is that the layer-wise normalization of SING and the element-wise temporal normalizations of Adam(W) work well together. 
Normalizing the gradient layer-wise could shrink the magnitude of some elements within a parameter tensor and prevent these elements from being updated. The element-wise update of Adam reinstates the magnitude of these elements. Furthermore, the normalization of the gradient prevents the explosion described in Figure 4 of the paper and in [8]. See point 3) of the global answer for more details. Furthermore, as pointed out by the reviewer, if the layer-wise gradient norm remains constant, the normalization is rendered useless. We argue that it is actually a good point: when the training procedure is very stable, using SING + AdamW is equivalent to AdamW. In other cases, SING helps to ensure stability during training. We propose to add these discussions at the end of section 3, noting that more work needs to be done to fully understand the interplay between these two normalizations. 5) **About the hyper-parameter tuning strategy** This paper mainly compares against AdamW as it is the de facto algorithm for most deep learning applications [2,3,4,5]. We agree we have not comprehensively tried all the configurations possible for all methods, but we strived to be as accurate as our computation budget allowed. Our main focus was two-fold: 1) ensuring a fair comparison between an optimizer and its version with SING, and 2) avoiding choosing a set of hyper-parameters that would have resulted from an overfit on the validation set. However, we agree that it might not be optimal for every optimizer. For the experiments on ImageNet and NLP however, the hyper-parameters used were the ones reported in the original sources, and while we tried searching for better ones, we haven't succeeded and ended up using them. These hyper-parameters were likely found using much more computation than ours and, therefore, could also play to our disadvantage. 6) **About the learning rate scheduler** We checked that the cosine scheduler was not harming the performance of SGD before using it. 
In every case, we found it was at least as good as the stepwise schedule. Using the cosine scheduling with SGD on ImageNet with a ResNet18 gives a final accuracy of 72.36% against 72.25% when using a stepwise scheduler (using the ffcv recipe at https://github.com/libffcv/ffcv-imagenet which does not exactly match our setting). Furthermore, using stepwise scheduling introduces several additional hyper-parameters, complicating the hyper-parameter tuning. 7) **Comparison against other adaptive methods** Following the reviewer's suggestion, we evaluated the impact of SING on AdaShift and AvaGrad. Limited computational resources allowed comparisons only on the RDE dataset. The results are available in Table 4 of the companion PDF. Although these methods are adaptive, they seem to require gradient clipping to work. This was not the case for Adam. Note that adding SING to these methods largely improves their results without extra hyperparameters. We will report all these results in the supplementary material and mention them in the main paper. 8) **Concerns about the ResNet18 used** We used the ResNet18 of the torchvision library, which, according to the documentation, is the implementation of the original ResNet18 by He et al. 9) **About the trainability of ViT with SGD** In our experience, we found the ViT to be notoriously hard to train with SGD, in alignment with other claims in the literature [4,6,7]. Furthermore, all of our experiments feature warmup, as pointed out in line 219 of the manuscript. We launched additional experiments mixing SGD with gradient clipping, and the results are in Table 3 of the companion PDF. While it helps the training, the performance is below SGD + SING. We will add these results to the paper. However, gradient clipping introduces an additional hyper-parameter (which might intuitively differ from one layer to another), whereas SING provides stability without any additional hyper-parameter.
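The reviewer's observation that, under a roughly constant layer-wise gradient norm $c$, normalizing the gradients before Adam amounts to running plain Adam with a per-layer rescaled $\epsilon$ can be checked algebraically on Adam's update formula: with $g'_t = g_t/c$ one gets $m'_t = m_t/c$ and $v'_t = v_t/c^2$, so $m'_t/(\sqrt{v'_t}+\epsilon) = m_t/(\sqrt{v_t}+c\,\epsilon)$. A minimal numerical check (ignoring bias correction and weight decay, which the identity does not require):

```python
import numpy as np

def adam_direction(grads, eps, beta1=0.9, beta2=0.999):
    """Final Adam step direction m_T / (sqrt(v_T) + eps),
    ignoring bias correction and weight decay for simplicity."""
    m = v = 0.0
    for g in grads:
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
    return m / (np.sqrt(v) + eps)

rng = np.random.default_rng(1)
g = rng.normal(size=100)   # one parameter's gradient history over 100 steps
c = 7.3                    # hypothetical constant layer-wise gradient norm
eps = 1e-8

# Dividing every gradient by the same constant c ...
u_sing = adam_direction(g / c, eps)
# ... gives the same direction as plain Adam with eps rescaled by c:
u_plain = adam_direction(g, c * eps)

print(np.allclose(u_sing, u_plain))  # True
```

This supports the rebuttal's point that SING's normalization only matters when the layer-wise gradient norms actually vary over training.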
Convergence of mean-field Langevin dynamics: time-space discretization, stochastic gradient, and variance reduction
Accept (spotlight)
Summary: This paper studies the convergence of various algorithms falling under the umbrella of *Mean-Field Langevin Dynamics* (MFLD). Those algorithms encompass diverse approximations of the usual Langevin dynamics $$ dX_t = -\nabla\frac{\delta F(\mu_t)}{\delta \mu}(X_t)\,dt + \sqrt{2\lambda}\,dW_t $$ on three main points: - when the measure $\mu_t$ is approximated by a finite-population version, i.e. the empirical measure $\mu_X$ of $(X_t^1, \dots X_t^N)$ all following the same dynamics, - when the dynamics are discretized according to a given time-step $\eta$, - and finally when we only have access to a noisy version $v_t^i$ of $\nabla\frac{\delta F(\mu_X)}{\delta \mu}(X^i)$. The authors provide quantitative bounds for the convergence of $\mu_X$ to an appropriately defined limiting measure, which minimizes an entropy-regularized objective. They first provide a one-step bound in the general case, which is then turned into more precise convergence bounds depending on the version of $v_t^i$ considered. Strengths: This paper considers a very important and widely studied problem in the study of wide neural networks. The results obtained are impressive: even if the finite population approximation is borrowed from Chen et al., it is complemented by very precise and technical arguments about discretization and stochastic approximation, which makes this paper a major improvement on the state of the art. The paper flows decently well, and even the appendix proofs are nicely explained and made as easy to follow as possible (which is not a simple task, given their technicality). The significance of the paper is made clear by expanding on several well-studied instances of the general problem. Weaknesses: On the significance side, the main drawback of the paper is the necessity for two regularization terms (one strong convexity term in the objective function $U$, and the entropic regularization term), and the heavy dependency of the bounds on those two parameters. 
I am aware, unfortunately, that this is for now a common restriction for any time-independent bounds on Langevin dynamics. The main drawback of this paper is that it is simply too technical for a NeurIPS submission. The presentation is good, but still suffers heavily from the 9-page (and even the 10-page, if accepted) limit, which forces a lot of inline math even for key equations (e.g. the log-Sobolev definition). This results in a very dense and sometimes hard-to-follow article, which could really benefit from a longer exposition. Notably, some examples (especially the first one) could be expanded upon, both to understand exactly the role of the first-variation functional and the comparison with existing results on SGD/mean-field for 2LNNs. All in all, this paper is more suited for a (very good) journal than for NeurIPS itself; however, I cannot recommend rejection based on the significance of the results. Some parts of the appendix are also a bit rushed, especially Appendix A, which only lists the necessary conditions for the examples to fit Assumptions 1 and 2 without any proof. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - When you remove the entropic regularization term, the existence of a limit measure for the dynamics is not guaranteed anymore; however, some papers (e.g. Chizat and Bach '18) manage to recover results by assuming that there is still a convergence. Do you think this could also be the case for your discretization results? - Can your methods be applied to directly show an approximation result similar to (Mei et al. '18), i.e. an approximation bound between $\mu_t$ and $\mu_{X_t}$ instead of a direct control on $\mathcal F(\mu)$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
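To make the three approximations discussed in this review concrete, here is a minimal Euler–Maruyama simulation of the finite-particle, time-discretized MFLD for a toy energy $F(\mu) = \frac{1}{2}\mathbb{E}_\mu[x^2] + \frac{a}{2}(\mathbb{E}_\mu[x])^2$, whose first-variation gradient at particle $i$ is $x^i + a\,\bar{x}$ (this concrete $F$ and the parameter values are only illustrative; the paper treats the general case):

```python
import numpy as np

def mfld_step(x, eta, lam, a, rng):
    """One Euler-Maruyama step of the finite-particle MFLD:
    x^i <- x^i - eta * grad(dF/dmu)(x^i) + sqrt(2*lam*eta) * xi^i,
    where for the toy energy grad(dF/dmu)(x^i) = x^i + a * mean(x)."""
    drift = x + a * x.mean()
    return x - eta * drift + np.sqrt(2 * lam * eta) * rng.normal(size=x.shape)

rng = np.random.default_rng(0)
N, eta, lam, a = 4000, 0.01, 0.5, 0.3   # particles, step size, temperature, interaction
x = rng.normal(size=N)

for _ in range(2000):
    x = mfld_step(x, eta, lam, a, rng)

# For this toy F the invariant measure is (up to an O(eta) discretization
# bias and O(1/N) propagation-of-chaos error) a centered Gaussian with
# variance lam, and the empirical mean is driven to roughly 0.
print(x.mean(), x.var())
```

A stochastic-gradient variant would replace `drift` by a noisy estimate, which is exactly the additional error source the paper's bounds control uniformly in time.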
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments. We address the technical points below. **Q:** *Some parts of the appendix are also a bit rushed, especially Appendix A, which only lists the necessary conditions for the examples to fit Assumptions 1 and 2 without any proof.* **A:** Thank you very much for pointing it out. We will extend some parts of the Appendix, including justifications of Assumptions 1 and 2, in the revision. **Q:** *some papers (e.g. Chizat and Bach '18) manage to recover results by assuming that there is still a convergence. Do you think this could also be the case for your discretization results?* **A:** Thank you for raising an interesting question. Indeed, Chizat and Bach (2018) showed global convergence by assuming convergence in the first place. That is, convergence is guaranteed only when the activation function is homogeneous and the solution satisfies some specific conditions which cannot be ensured beforehand (more precisely, if the solution converges to some distribution in $W_2$ distance, then the limit distribution is the global optimum; however, whether the solution converges or not is not guaranteed). In contrast, in our setting global convergence is guaranteed due to the entropy regularization, similar to (Mei et al. 2018). If we completely remove the regularization, then the quantitative convergence rate (which is exponential) would be lost unless additional assumptions are imposed, and consequently, the uniform-in-time propagation of chaos estimate would be very challenging to establish. We intend to investigate this problem in the future. **Q:** *Can your methods be applied to directly show an approximation result similar to (Mei et al. '18), i.e. 
an approximation bound between $\mu_t$ and $\mu_{X_t}$ instead of a direct control on $\mathcal{F}(\mu)$?* **A:** Yes, we have a convergence of $\mu_{X_t}$ and $\mu_t$ to $\mu^*$ in terms of the Wasserstein distance, which also yields convergence of the Wasserstein distance between $\mu_{X_t}$ and $\mu_t$ by the triangle inequality. Once again, the main difference from Mei et al. (2018) is that their discretization error blows up exponentially over time, whereas our bound remains stable for all $t>0$. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: Thank you for your instructive answers. I still think this would be more suited for a journal publication, hence I will maintain my grade, but this is a very good set of results.
Summary: This work considers the analysis of mean-field Langevin dynamics as implemented algorithmically, i.e., with a) particle approximation, b) time discretization and c) stochastic gradients. Under the assumption of certain logarithmic Sobolev inequalities, prior works were mostly restricted to the continuous-time, infinite-particle idealization of the dynamics. Strengths: Extensive and explicit non-asymptotic convergence rates are obtained for the practically implemented algorithm with finite particles, time discretization and stochastic gradients, which is a great addition to the literature. Weaknesses: 1. What are the main technical insights in this work? It seems like an extension of Wibisono and Vempala's LSI analysis of LMC while accounting for the mean-field and stochastic gradients. 2. Assumption 4 seems non-standard and the existence of a.s. bounded large order derivatives of the stochastic gradient seems restrictive compared to the assumptions in the SGD/SGLD literature. Similarly, Assumption 5 seems too restrictive. 3. What is the point of an SVRG-type algorithm when straightforward stochastic approximation like SGLD itself outperforms it? This is true in Table 1 for MFLD. We also refer to Theorem 6 in [A1] for results on regular Langevin dynamics. 4. Can you compare your technique with the recent results which utilized the CLT structure in the batched noise to obtain sharp analysis of stochastic approximations such as SGLD? (see [A1]). In Theorem 3 in the current work, with a batch size $B$, the error bounds will have an $\frac{\eta}{B}$ dependence (similar to [A2]), whereas [A1] gets a rate of $\frac{\eta}{B^2}$. Since the analysis techniques are close to those of SGLD, and one of the main contributions is the addition of stochastic gradients, it would be good if the authors compared the results to those of [A0,A1,A2] on a technical level. 
[A0] Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis [A1] Utilising the CLT Structure in Stochastic Gradient based Sampling: Improved Analysis and Faster Algorithms [A2] Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling Minor: Theorem 2, definition of $\mathcal{Y}_k$ needs to have $\eta_k$ The notation is a bit cluttered at many points, which makes the paper hard to read sometimes. I will give a borderline accept for now. Happy to improve my score once the authors provide a satisfactory response. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses section Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
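Regarding point 3 above, the variance-reduction mechanism behind SVRG-type estimators can be illustrated on a finite sum: the control-variate estimator $\nabla f_i(x) - \nabla f_i(\tilde{x}) + \nabla F(\tilde{x})$ is unbiased and its variance shrinks with $\|x - \tilde{x}\|$, unlike the plain stochastic gradient (a toy quadratic example, unrelated to the paper's actual setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
a = rng.uniform(0.5, 2.0, size=n)   # f_i(x) = a_i * (x - b_i)^2 / 2
b = rng.normal(size=n)

def grad_i(x):
    """All n component gradients grad f_i(x) = a_i * (x - b_i)."""
    return a * (x - b)

x_anchor = 0.7    # snapshot point x~
x = 0.72          # current iterate, close to the snapshot

full_grad_anchor = grad_i(x_anchor).mean()   # full gradient at x~

plain = grad_i(x)                                        # plain SGD estimator over i
svrg = grad_i(x) - grad_i(x_anchor) + full_grad_anchor   # SVRG estimator over i

# Both estimators have the same expectation (the full gradient at x),
# but SVRG's variance collapses when x is close to the snapshot.
print(np.isclose(plain.mean(), svrg.mean()))   # True
print(svrg.var() < plain.var())                # True
```

Whether this reduced variance translates into a better rate than SGLD in the mean-field setting is exactly what the reviewer's point 3 questions.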
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments. **Q:** *What are the main technical insights in this work? It seems like an extension of Wibisono and Vempala's LSI analysis of LMC while accounting for the mean-field and stochastic gradients.* **A:** First, we note that the analysis of the mean-field extension of Langevin dynamics is far from trivial, even in the continuous-time and infinite-width limit. As for the discretized algorithm, we need to establish a uniform-in-time propagation of chaos, which is technically challenging as we have to handle (1) the finite particle approximation, (2) the stochastic gradient approximation, and (3) the time discretization. Note that such propagation of chaos calculation is not required in Wibisono and Vempala's analysis due to the absence of mean-field interaction. Our main contribution is to derive these errors in one unified framework. In particular, (1) and (2) are technically demanding, and we obtain a faster rate than existing work taking these approximation errors into account, under an additional smoothness condition. Please refer to our Introduction section for a more detailed summary of our contribution. **Q:** *Assumption 4 seems non-standard and the existence of a.s. bounded large order derivatives of the stochastic gradient seems restrictive compared to the assumptions in SGD/SGLD literature. Similarly Assumption 5 seems too restrictive.* **A:** Please note that our theorem also holds without Assumption 4. What we intend to convey is that the convergence rate can be further improved if we additionally assume Assumption 4. Indeed, if we look at the statement of Theorem 2, there are two types of bounds: one is without Assumption 4 and the other is with Assumption 4. We will make this point more explicit in the revision. 
Also, Assumptions 5 and 6 are not new and isolated assumptions, but rather sufficient conditions for Assumption 4 when applied to different gradient estimators; and similarly, we have a valid convergence rate without these assumptions. Hence, our analysis includes the standard SGLD setting as a special case. **Q:** *Can you compare your technique with [A1]?* **A:** Thank you very much for letting us know about a relevant paper. Indeed, this paper is quite interesting and should be taken into account. Our bound (Theorem 2) is so general that the stochasticity can be induced not only by the minibatch selection but also by any other source of randomness. Hence, we cannot directly compare with their result. On the other hand, for certain special settings (SGD-MFLD and SVRG-MFLD), we can adapt their result into our analysis. That is, we may replace the term $\sigma_{v,k}^2$ by their evaluation of the square of the conditional expectation of the stochastic gradient, which would give a better bound than our naive bound. On the other hand, if we assume Assumption 4, then for the SVRG setting, the dependency on $\epsilon$ is $\min\\{\sqrt{n/\epsilon},1/\epsilon\\}$ while their bound only achieves $1/\epsilon$. Hence, we have a better dependency on $\epsilon$ in the small $\epsilon$ regime with a stronger assumption. That being said, we have the following concern on the proof reasoning in [A1]. In the proof of Lemma 2, which is the key lemma to obtain a tighter bound, it is argued that $N_1,\dots,N_B$ are independent and identically distributed even after conditioning on $x_{k\eta}$ and $x_t$. However, it seems that this is not true because permutation invariance does not necessarily result in independence. More precisely, it should be mentioned that the marginal distributions of the $N_i$ are the same and the expectation of their average can be replaced by the expectation of one of them. (We think the assertion itself is correct.) 
We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Title: Thanks for the Response Comment: Thank you for the response. This clarifies my questions. Bumping my score to a 6.
Summary: In the work authors study mean field Langevin dynamics under stochastic gradient updates and prove uniform in time propagation of chaos that takes into account discretisation, stochastic errors which allows to establish convergence rates of MFLD. Strengths: 1. Strong theory with explicit bounds 2. Excellent outline of results 3. Thorough comparison of convergence rates with other approaches 4. Wide applicability of results to different sg estimators and neural networks in mean field regime. Weaknesses: Seeing some numerical evaluation could make results even stronger to support the theory. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive evaluation. Please find our answer to your concerns. **Q:** Seeing some numerical evaluation could make results even stronger to support the theory. **A:** Thank you very much for your suggestion. Since the theoretical part already occupies the whole paper, we chose not to include the numerical experiments in the main text. Following your suggestion, we will add numerical experiments in the appendix. Indeed, our preliminary experiments show good convergence with respect to the number of particles, and the stochastic gradient also converges properly. We would like to add more thorough experiments in the final version. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: Dear reviewer, The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. Your input during the discussion period is valued and helps improve the paper. Thanks, Area Chair
Summary: This paper studies the mean-field Langevin dynamics (MFLD) with stochastic gradient updates. In particular, the authors propose a general framework to prove a uniform-in-time propagation of chaos for MFLD. The authors establish the convergence rate guarantees to the regularized global optimal solution, simultaneously addressing the uses of particle approximation, time discretization and stochastic gradient. Strengths: This work provides a quite general framework for analyzing the mean-field Langevin dynamics (MFLD) with stochastic gradient updates. The authors establish the convergence rate guarantees to the regularized global optimal solution, simultaneously addressing the uses of particle approximation, time discretization and stochastic gradient. The results are interesting and appear to be novel. Various examples and practical implementations of MFLD are also provided to demonstrate the effectiveness of the proposed framework. Weaknesses: Purely theoretical paper. No numerical experiments are provided. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It is suggested that the authors should provide some simple numerical experiments to demonstrate the effectiveness of the proposed algorithms. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive evaluation. We address the technical comments below. ### Weaknesses: **Q:** Purely theoretical paper. No numerical experiments are provided. **A:** Since the theoretical part already occupies the whole paper, we chose not to include the numerical experiments in the main text. Following your suggestion, we will add numerical experiments in the appendix. Indeed, our preliminary experiments showed good convergence with respect to the number of particles even with stochastic gradient approximation. ### Questions: **Q:** It is suggested that the authors should provide some simple numerical experiments to demonstrate the effectiveness of the proposed algorithms. **A:** Thank you for your suggestion. We have already done a preliminary experiment. We would like to include more thorough experiments in the appendix of the camera-ready version. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: Dear reviewer, The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. Your input during the discussion period is valued and helps improve the paper. Thanks, Area Chair --- Rebuttal Comment 1.2: Title: Reply to Rebuttal Comment: This is overall a very interesting work, regardless of the addition of numerical experiments. Adding experiments is just a suggestion. Also, as a more theoretical researcher, I personally think that the theoretical contribution of the paper is sufficient for me to champion its acceptance. With numerical experiments, I would say the goal is to reach a broader audience and raise the impact of the paper, especially for large conferences like NeurIPS.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper provides a set of results to analyze convergence of mean-field Langevin dynamics with a set of related algorithms. These results are in discrete time and space, using log-Sobolev inequality techniques from optimal transport, which creates the opportunity for extensions to other settings. The propagation of chaos results are also proved in discrete time. Strengths: My feeling is that this paper provides a useful advance in its effort to systematically control discretization error in the propagation of chaos. I am not familiar with other results that have done this. The rather explicit results obtained for the mean-field dynamics settings illustrate favorable scaling for single-loop algorithms, at least in sufficiently smooth problems. Weaknesses: I think the differences from Nitanda and Chizat could be articulated more clearly. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Assumption 4 is rather complicated. Can the motivation for it be explained more clearly in the text? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: LSI conditions are obviously hard to satisfy for some problems, but this is well-known and well-discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive comments. We address the technical comments below. ### Weaknesses **Q:** I think the differences from Nitanda and Chizat could be articulated more clearly. **A:** Thanks for your suggestion. Please note that the differences from Nitanda et al. (2022) and Chizat (2022) are summarized in the Introduction section. That is, they did not derive (1) the finite particle approximation error of MFLD and (2) the stochastic gradient approximation error. On the other hand, we established a unifying framework to evaluate the time discretization error, the finite particle approximation, and the stochastic gradient approximation. One of our contributions is to derive a tight bound on the stochastic gradient approximation error in the propagation of chaos analysis, which is challenging because it requires evaluating the correlation between the randomness of each gradient and the updated distribution. ### Questions **Q:** Assumption 4 is rather complicated. Can the motivation for it be explained more clearly in the text? **A:** Assumption 4 is essentially a higher-order smoothness condition, which is required to obtain an improved bound on the stochastic gradient approximation, i.e., a smoother loss yields a better stochastic approximation -- we will discuss this in more detail in the revision. As an example, Assumption 5 (ii) and Assumption 6 (ii) are sufficient conditions for Assumption 4 in each corresponding situation (gradient estimator). We believe that these conditions are rather intuitive because they only impose boundedness of the second-order derivatives. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: Dear reviewer, The author-reviewer discussion period ends in 2 days. Please review the authors' rebuttal and engage with them if you have additional questions or feedback. 
Your input during the discussion period is valued and helps improve the paper. Thanks, Area Chair
Intervention Generalization: A View from Factor Graph Models
Accept (poster)
Summary: This paper studies the problem of generalization in causal inference. In particular, it extends the factor graph to the interventional factor model (IFM). It shows when such a model can be identified and provides a practical learning algorithm. The setting assumes knowing the factorization; this type of structural knowledge has its advantages and limitations. Strengths: The paper is well motivated with good theoretical results and empirical experiments. Weaknesses: 1. This paper does not provide many real-world examples of the interventional factor model. The simulation is also semi-synthetic. I am a little concerned about the applicability of such a model. 2. There seems to be a disconnect between the identifiability results and learning algorithms (see the questions section). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Because the term junction tree is explicitly mentioned in Theorem 3.1, it would be good to have a brief definition of it. 2. Can you elaborate a bit more on the algebraic formulation of this problem? This section is a bit harder to digest. 3. I am a little confused about the usefulness of Theorems 3.1 and 3.2. As the authors mentioned in the first paragraph of Section 4, they do not use the product of density ratios but rather use a deep energy-based model or IPW. Are the identifiability results in 3.1 and 3.2 necessary for these methods to work? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for commenting that our paper is "well motivated with good theoretical results and empirical experiments"! **Real-world applicability:** coming from a more domain-specific theory, such structures also emerge from models of equilibrium. Consider this example that can be found in [8], Section 3.1.1, where the $f$'s denote differential equations at equilibrium, $X$ denotes observed random variables, $U$ denotes (mutually independent) latent variables and $I$ denotes an intervention indicator: $$f_I: X_I - U_I = 0$$ $$f_D: U_1(X_I - X_O) = 0$$ $$f_P: U_2(gU_3X_D - X_P) = 0$$ $$f_O: U_4(U_5I_KX_P - X_O) = 0$$ After marginalizing the $U$ variables, what we get is an IFM (this is not the whole story, though, as other non-graphical constraints may take place given the particular equations. [8] discusses $U_I$ being marginally independent of $I_K$ on top of the above). In general, energy-based models are to be interpreted as conjunctions of soft constraints, the factor graph is one implication of a system of stochastic differential equations, and interventions denote changes to particular constraints. An SDE model may have *other* assumptions on top of the factorization, and parameters which carry particular meaningful interpretations, but this fits well with our claims that the IFM is a "minimalist" family of models in terms of structural assumptions - a reasonably conservative direction to follow, particularly when the dynamics of many natural phenomena cannot be (currently) measured at the individual level, as is the case for much cell biology data. [33] has further elaborations on some of these ideas. **Usefulness of Theorems 3.1 and 3.2:** it is clear that we should have been more explicit in the transition between these sections. Section 3 is about identification methods, and those don't necessarily need to translate directly into estimation methods. 
By analogy, think of the formulas implied by the do-calculus or G-computation, and how it's not necessarily the case they are used in estimation methods by plugging-in a corresponding estimate of a density. That is, the proofs in Section 3 are constructive, but this doesn't imply they should lead to a one-to-one correspondence with a learning algorithm (for instance, G-estimation looks very different from what a direct approach based on G-computation may suggest). As long as we know that we can get $\Sigma_{test}$ by products of ratios of densities in $\Sigma_{train}$, it doesn't matter whether we can identify each factor, as long as all densities follow the common factorization that shares factors; a purely likelihood-based approach then suffices (we did try density ratio estimation methods, but they worked poorly out-of-the-box and we decided to omit further discussion on them). We will clarify this in a final manuscript by a more fleshed out bridge between Sections 3 and 4. All problems in Section 5 can be identified based on the fact that it satisfies the conditions of Theorem 1, as there is no more than one $\sigma$ per factor and $\Sigma_{train}$ spans all necessary elementwise support for each $\sigma$ variable (a scenario we mention in Section 3). We will comment on this explicitly in a revised manuscript. **Algebraic formulation:** essentially, we need to find a vector $\{q_i\}$ such that $p(x; \sigma) \propto \prod_{i = 1}^t p(x; \sigma^i)^{q_i}$, or show that there is no solution. With $x$ fixed, this is done by treating each factor $f_k(x_{S_k}, \sigma_{F_k})$ as a symbol in an algebraic system. This means that each density, up to a multiplicative constant, is a monomial in those symbols. For instance, in the example in Figure 3, we use the symbol $f_1^{00}$ to denote the factor $f_1(x; \sigma_1 = 0, \sigma_2 = 0)$, while e.g. $f_3^{10}$ denotes $f_3(x; \sigma_1 = 1, \sigma_3 = 0)$ and so on. 
So the monomial corresponding to $p(x; \sigma = (1, 1, 1))$ is $f_1^{11}f_2^{11}f_3^{11}$ and so on. Now, a PR transformation from the set of 7 training densities in Figure 3 (all configurations of binary $\{\sigma_1, \sigma_2, \sigma_3\}$ except for the test configuration $\sigma_1 = \sigma_2 = \sigma_3 = 1$) is of the form $$(f_1^{00}f_2^{00}f_3^{00})^{q_1} \times (f_1^{01}f_2^{10}f_3^{00})^{q_2} \times \dots \times (f_1^{10}f_2^{01}f_3^{11})^{q_7}.$$ For an algebraic identity between $f_1^{11}f_2^{11}f_3^{11}$ and the above to hold (up to a multiplicative factor), it is necessary that $q_1, q_2, \dots, q_7$ are such that, in the resulting multiplication, the resulting exponents for $f_1^{11}$, $f_2^{11}$, and $f_3^{11}$ are 1, and all others are zero. For instance, $f_1^{00}$ appears in the regimes in lines 1 and 5 in the table of Figure 3. Hence, we must have $q_1 + q_5 = 0$. Symbol $f_1^{01}$ appears in lines 2 and 6, and therefore we must have $q_2 + q_6 = 0$. This goes on for all symbols (columns in the table), ending with $f_3^{11}$, which only appears in the final density, making $q_7 = 1$. This gives a system with one solution, the one shown in the caption. The admittedly ugly Eq. (4) is a compact way of describing this system of equations. **Brief definitions:** provided with the opportunity to write a camera-ready version of this paper, we will have one extra page and we will be able to move selected and summarized definitions from the appendix into the main body, including that of a junction tree. --- Rebuttal Comment 1.1: Comment: Thanks for your reply! Your responses have addressed my concerns and questions thoroughly. I will keep my score of 6 and still lean towards acceptance.
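The small linear system described in this rebuttal can be checked mechanically. The sketch below reconstructs the Figure 3 setup under our reading of the discussion above (factor 1 tagged by $(\sigma_1, \sigma_2)$, factor 2 by $(\sigma_2, \sigma_3)$, factor 3 by $(\sigma_1, \sigma_3)$ — an assumption, since the paper's exact table is not reproduced here) and solves for the exponent vector $q$ by least squares:

```python
import itertools
import numpy as np

# Assumed Figure-3 incidence: factor 1 is tagged by (sigma_1, sigma_2),
# factor 2 by (sigma_2, sigma_3), and factor 3 by (sigma_1, sigma_3).
def symbols(regime):
    s1, s2, s3 = regime
    return [("f1", s1, s2), ("f2", s2, s3), ("f3", s1, s3)]

test_regime = (1, 1, 1)
train = [r for r in itertools.product([0, 1], repeat=3) if r != test_regime]

# One equation per symbol: the summed exponents must equal 1 on the three
# test-regime symbols and 0 on every other symbol.
syms = sorted({s for r in train + [test_regime] for s in symbols(r)})
A = np.array([[symbols(r).count(s) for r in train] for s in syms], dtype=float)
b = np.array([1.0 if s in symbols(test_regime) else 0.0 for s in syms])

q, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ q, b))  # True: an exact solution exists, so the test regime is identifiable
# The exponents follow the inclusion-exclusion pattern q_r = (-1)^(number of ones in r).
```

Under this reconstruction the system has a unique solution, matching the rebuttal's elimination argument (e.g. $q_7 = 1$ for the regime whose monomial contains $f_3^{11}$).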
Summary: This work proposes the use of factor models as a graphical causal model to generalize from past experiments. The authors introduce factor models, describe their relative merits, then describe how a factorization can be derived for a given intervention, and give approaches for estimation using deep energy-based modeling and weighting approaches. Experimental results are shown which provide a comparison of the performance of the proposed approach to other approaches (DAGs and black box estimation) Strengths: * This is a very interesting idea, and I generally agree with the authors' central claims around the opportunities provided by using factor models for causal inference. * The addition of conformal inference for uncertainty intervals here is a nice, and elegant addition to the paper. * The authors do a nice job of providing a thorough empirical evaluation of the proposed approach Weaknesses: One of the weaknesses of factor models is that it is more difficult to perform inference than in DAGs, where there is a simple factorization that can be exploited. Factor models are also less immediately interpretable than DAG models. While this isn't necessarily a problem in itself, it would be useful if there was a more plain discussion about the tradeoffs involved in this representation. I also found the presentation to be a little difficult to follow. There are a number of missing discussions that would be useful to contextualize the proposed approach in the broader literature (e.g., ADMGs, segregated graphs, gated factor graphs). It's also a little unclear to me whether there is a sound and complete identification algorithm here. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Is the set of identifiable estimands comparable to other graphical causal models like ADMGs? * For these estimation approaches (and more generally) is it possible to provide a sense of convergence/consistency of the causal parameters? 
* Given the experimental results where there does not seem to be a clearly preferable approach in all settings, how should someone decide when it is appropriate/necessary to employ an IFM? * Can you provide a discussion of the current proposal with "Causality with Gates" by Winn (AISTATS 2012). I can see that there are significant differences between these two texts, but given that they are both concerned with the use of factor graphs for causal inference I think it should be discussed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for finding our idea to be "very interesting", implemented with "elegant additions" on uncertainty quantification and a "thorough empirical evaluation"! In what follows, we address your questions. **Trade-offs in the representation:** one of the main defining features of DAGs is the natural presence of marginal independencies between intervention variables and random variables (a random variable $X_i$ is independent of an intervention variable $\sigma_j$ when its distribution doesn't change as we choose different levels for $\sigma_j$. See [17] for further discussion about what "independence" means in the context of non-random intervention variables). That "the future doesn't cause the past" is one of the sources of this type of independence. In contrast, no marginal independencies of this type are structurally encoded in a (connected) IFM. This is more natural in a scenario where a design vector $\sigma$ is set at the initial stage of a process, and the system runs towards some equilibrium. No element $\sigma_i$ in the intervention vector is decided based on outcomes caused by the other $\sigma_j$ in this vector. This is what happens in many datasets such as Sachs et al. Although traditionally DAGs have been applied to it, the nature of the data sampling mechanism, and the feedback generative mechanism itself, puts such a construction in dispute. References [8] and [32] have a discussion on such issues, where directed components are still used - but even there, we believe there is much to appreciate about the relative simplicity of an IFM, leading to both more tractable theoretical and practical analyses when integrating data from multiple jointly-intervened design vectors. DAGs are also natural where there is a sequential plan, i.e., instances of closed-loop control where an action is chosen based on the outcomes of previous actions. 
Even in this case, cross-sectional IFMs could be combined in a sequence, as an instance of chain graph modelling where a design vector is jointly decided at different stages, a next equilibrium distribution is achieved, and a consecutive round of decisions is taken based on the previous equilibrium. So, we envision that a natural extension is a chain-graph combination of directed components and IFMs. Data for that is not as easy to find as for a single-shot IFM, hence our focus here. **Soundness and completeness:** an IFM is definitely not meant to capture possible identification conditions that rely on marginal independencies, which can be exploited by DAG-based models. Likewise, the closest DAG relaxation of an IFM model will fail to capture relevant constraints (e.g., a bivariate model given by a copula $c(x_1, x_2)$ and marginals $f_1(x_1; \sigma_1)$, $f_2(x_2; \sigma_2)$). We also define a broad family of transformations (PR transformations), which we informally conjecture encompasses all symbolic mappings from training to test regimes. Within PR transformations, we provide a sound and (measure one) complete method to show identifiability. **When to decide on the use of an IFM:** following the above discussion, it is clear that the choice of family matters. Our DAG-based simulations were designed to make the DAG alternative competitor look as good as possible, matching the exact additive-error conditionally Gaussian family. In practice, off-the-shelf goodness-of-fit techniques, even if using the baseline regime only, should provide diagnostic tools for this choice. Even plotting simulations from the fitted model $p(x|\sigma^0)$, compared against the training data, already provides strong indications of what each model can accomplish. 
**Estimation:** in the sense of Fisher (pointwise) consistency, learning densities with the postulated factorization, and supervised methods for mapping $X$ to $Y$, will guarantee consistency for any $\mathbb{E}[Y; \sigma]$ identifiable from the conditions in Section 3. This is true even if not all individual factors in the factorization are identifiable (up to a multiplicative constant): the constructive identification results bind training densities to test densities; having the required factorization and support is what matters. We agree it would be interesting to have other inferential tools for the causal parameters, e.g., rates of convergence. It is not our goal to address it here but we hope this fosters further work in the community, particularly considering that the causal functionals of interest, $\mathbb{E}[Y; \sigma^\star]$, are relatively simple. **Winn (2012):** many thanks for the reference! It is an early example of how factor graphs can be used to encode dependencies between variables in a causal graph and well worth discussing in our paper. We highlight that our focus is on explicit modeling of non-atomic interventions and identifiability of out-of-distribution regimes, while Winn (2012) sets the stage on how do-calculus applied to DAGs can be translated to a type of factor graph, although it must rely also on context-specific independencies (via the "gates" variation of a factor graph model). An excellent pointer to be added to our references. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response, and apologies for a delayed reply. I appreciate your framing on the generality of factor models versus DAGs, however it still isn't entirely clear to me when a practitioner should prefer to use the factor graph over existing frameworks, especially since identification is out of scope of this paper (contrast this to e.g., ADMGs which do admit identification). 
With that being said, I feel that the authors response does alleviate at least some of my concerns and I am upgrading my score to reflect this. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thanks for the further consideration, we are mindful of real-world constraints and we appreciate the feedback at any stage! To be honest, we are not totally clear which missing identification results are being referred to, but no hurry. Any further detail you may be able to provide us at some point at the time decisions are released will be welcome and we will take them into full consideration. Many thanks again! --- Rebuttal 2: Title: Can we help with more comments? Comment: Thank you for all the feedback so far. Please do let us know if there is anything else for us to address in our rebuttal. All the best, the authors
Summary: The authors present the interventional factor model, a more general formalization used to predict the effect of treatment on an outcome in unseen regimes. It is more general than existing formalizations since it does not assume causal graphs to be directed acyclic graphs (DAGs). The authors show in this new formalization what the conditions are to ensure identifiability of treatment effects. Finally, the authors propose several methods that can estimate treatment effects and show their effectiveness with semi-synthetic experiments. Strengths: The article is well written. It proposes a really general formalization using factor graphs that is original. It encompasses identification results, several algorithms, and experiments. Weaknesses: The contribution is limited in the sense that identification results for DAGs are already well established (do-calculus and $\sigma$-calculus). This work addresses the more general case where the graph is not necessarily a DAG. It supposes that the graph is known, but this assumption is strong in practice, since it is challenging to elicit these kinds of general graphs from both the expert-knowledge and structure-learning perspectives (even more so than for DAGs). Real-world applications could surely help motivate the use of this formalization. The present semi-synthetic experiments are interesting; however, the simple black-box baseline method performs really well overall, undermining the use of the more involved proposed methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Minor typos or style suggestions: - line 39: "we submit" => "we argue" or "we claim" - line 46: use a colon instead of a full stop - line 59: why use the aleph symbol, could another common letter be used instead? - line 204: "Equ." => "Eq.". It is more common and "Eq." is used for all the other references to equations. 
- line 350: citation 2 is repeated Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the limitations have been addressed, and the societal impact is not really applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your comments, and for finding our article "well written" and "original"! **On identification/elicitation/learning:** we are not claiming that such a family is universally superior to DAGs, but there is no shortage of domains where energy-based formulations are more natural than directed representations, such as in data composed of snapshots of an equilibrium distribution (as in many cell biology applications). In particular, a set of equilibrium equations $$f_k(X_{S_k}, \sigma_{F_k}, U_k) = 0,$$ where $f_k$ is a stochastic differential equation and $U_k$ is a set of hidden variables, is an IFM with hidden variables and partially deterministic factors. A full-blown SDE is likely to be challenging to specify (even more so if no dynamics are observable), but the factorization implied by it can still be used to set up an IFM. We have amended the text to motivate our method with practical use cases of this form. Regarding learning/elicitation of energy-based model structures, if an expert or structure learning algorithm is able to answer queries about Markov blankets, then that's already sufficient. A minimal procedure is as follows: start with a fully connected undirected graph among all variables. For all pairs of variables $(V_i, V_j)$, ask: "is $V_i$ conditionally independent of $V_j$ given all other variables? If yes, then remove the $(V_i, V_j)$ edge". When done, transform each clique in this graph into a factor in an IFM. That's it, but bear in mind that the definition of "conditional independence" for two regime indicators $\sigma_i$ and $\sigma_j$ is non-standard; see Section 7 of Constantinou and Dawid, "Extended conditional independence and applications in causal inference" (Annals of Statistics, 2017). 
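For the all-Gaussian case, the elicitation loop just described has a particularly simple oracle: $V_i$ is independent of $V_j$ given all the rest exactly when the $(i, j)$ entry of the precision matrix is zero. A minimal sketch of the procedure under that assumption (the chain example, sample size, and 0.1 threshold are our illustrative choices, not from the paper):

```python
import numpy as np

# Toy data from a chain X1 -> X2 -> X3, so X1 is independent of X3 given X2:
# the (X1, X3) edge should be dropped, and the surviving cliques {X1, X2} and
# {X2, X3} become the factors of the elicited IFM skeleton.
rng = np.random.default_rng(0)
n = 20000
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)
x3 = 0.8 * x2 + 0.6 * rng.standard_normal(n)
X = np.column_stack([x1, x2, x3])

# Gaussian case: "X_i independent of X_j given all others" <=> precision[i, j] = 0.
P = np.linalg.inv(np.cov(X, rowvar=False))
partial_corr = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))

edges = {(i, j) for i in range(3) for j in range(i + 1, 3)
         if abs(partial_corr[i, j]) > 0.1}   # thresholded test stands in for the oracle
print(sorted(edges))  # [(0, 1), (1, 2)]
```

With a human expert or a proper statistical test in place of the threshold, the same loop applies to non-Gaussian variables and, with the caveat noted above, to regime indicators.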
For structure learning methods, we do need to bear in mind that only a limited set $\Sigma_{train}$ of interventional configurations are available, and it's possible that independencies within other ranges found in $\Sigma$ won't hold. Where data is lacking for sufficient levels of $\sigma_i$ variables, knowledge of physical/spatial structure can play a role, suggesting which direct interactions between interventions and random variables can be safely discarded or not. After all, without this knowledge, not even $do$ operations could be linked to real data. **Black-box baseline method:** The black-box algorithm did perform competitively in a fair number of experiments. This is just a matter of fact that we encountered after running them. There is no expectation that it will work in general (it's not meant to), and we did not want to perform dataset-selection of any kind. **Notation:** Thank you very much for the detailed comments about style and presentation, we will implement them (we are fond of $\aleph$ for denoting cardinality, but if this causes confusion with its more fundamental uses in set theory, we have no problem changing it). --- Rebuttal Comment 1.1: Comment: I extend my gratitude to the reviewers for their comprehensive response. I tend to think that this paper should be accepted.
Summary: This paper considers how to use data from past interventions to generalize to new, unseen interventions. This is an important practical problem to consider, as running additional experiments is often costly/infeasible. In order to tackle this problem, the authors consider a graphical models approach; specifically, they consider an interventional factor graph model (IFM). They posit an IFM factorization of the density $p(x; \sigma)$. Then they provide sufficient conditions for identification, and provide a message passing algorithm to do so. Next, they discuss multiple approaches to estimate the density based on ML models, as well as via IPW methods. They also consider a covariate shift regression approach. Further, they provide a conformal inference approach to establish coverage. Lastly, they perform a number of experiments to establish the empirical efficacy of their method. Strengths: The paper tackles a very important problem: how can we use past interventions to generalize to new interventions? They propose an innovative IFM model and message passing algorithm to establish identification for this problem. Viewing this problem under this lens is interesting, and potentially of practical use. Further, I appreciate the authors providing examples so that it's easier to understand their identification argument. I also appreciated the authors providing multiple methods for estimating the density discussed earlier. I believe providing multiple approaches is often of great practical use since no single algorithm works well in all scenarios. In terms of empirical evaluation, I think it's interesting that the authors used semi-synthetic data. I believe this is good practice, and should be followed more often. Weaknesses: Presentation: Presentation of both the regression and coverage algorithm is confusing, and seems to require a lot of additional knowledge on behalf of the reader. I do not fully understand how these algorithms proceed. 
For example, the deep energy-based models in lines 237-239 are very unclear. Making it clear what exactly is being fit would be very useful. More generally, being clear and rigorous regarding these things will go a long way in making the paper clearer. Empirical Evaluation of Coverage: I did not see any simulations to this effect. Empirical Evaluation: It is not clear to me what exactly X is in these datasets. Once again, being clear and rigorous about these details will enhance understanding, and give the reader a chance to appreciate the empirical evaluation. Further, even after reading the appendix, I do not understand how the outcomes were generated. Could the authors please clarify empirical details in the rebuttal? Comparison to related work: The comparison to [2] is incorrect. The authors claim that a series of works including that of [2] require data to be collected for all regimes in $\Sigma_{\text{test}}$. This is not the case. For example, the experimental design section of [2] shows that this isn't the case. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I have made some suggestions in the weaknesses section. I list some other questions here. Deep energy-based models: Does fitting the parameter vector $\theta_{k, \sigma_{F_{k}}}$ require knowledge of the set of variables in $F_k$? I am confused by this, and if it does, how do we determine these variables in practice? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes, they have. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for all of your feedback, for the kind words on this being a "very important problem" and "innovative" in its solution! Moreover, thanks for asking clarification questions that will definitely improve the presentation of the paper. **Method clarification:** to fit a deep energy model, proceed as follows: 1. Each log-factor $\phi_{\theta_{k, \sigma_{F_k}}}(X_{S_k})$ is given by a MLP with a chosen number of hidden units. It is a different MLP for each possible value of $\sigma_{F_k}$. For instance, in our empirical studies, there is at most one $\sigma_i$ variable associated with each factor. If $\sigma_{F_k}$ is non-empty, then we have two MLPs for factor $k$, which are $\phi_{\theta_{k, \sigma_{F_k} = 0 }}(X_{S_k})$ and $\phi_{\theta_{k, \sigma_{F_k} = 1}}(X_{S_k})$. The set $\theta_{k, \sigma_{F_k} = 0}$ consists of the weights and biases of the corresponding MLP/local regime. 2. We fit all parameters $\theta_{k, \sigma_{F_k}}$ by pseudo-log-likelihood. This means maximizing the sum of the univariate conditional log-likelihoods over all datasets $\mathcal D^i$ as $$\sum_{i = 1}^t \sum_{j = 1}^{n_i} \sum_{q = 1}^p \log p_\theta(x^{ij}_q|x^{ij}_w; \sigma^i),$$ where $x^{ij}_q$ is the $q$-th coordinate of the $j$-th data point of $\mathcal D^i$, and $x^{ij}_w$ are the remaining coordinates of the same row of the same dataset. Each entry in the sum above can be given as the negative energy function minus the corresponding log normalizing constant, which requires summing over the possible values of $X_q$ only *(apologies for not writing the equation explicitly, but after a long fight with apparent OpenReview's Markdown bugs, it seemed impossible to write the log-sum-exp and sub/superscripts we wanted to write. We will write it explicitly in the final manuscript)*. The above is then optimized by Adam, a gradient-based algorithm, and each MLP has a single hidden layer with a hyperbolic tangent activation function and a linear output layer. 
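To make step 2 concrete, here is a minimal pseudo-log-likelihood computation for a toy binary pairwise model; a single scalar coupling stands in for the MLP log-factors, and the coupling value, sampler, and data size are illustrative assumptions:

```python
import numpy as np

# Toy pairwise binary model p(x1, x2) ∝ exp(theta * x1 * x2) with x_q in {-1, +1}.
# Its conditionals are p(x_q | x_-q) = sigmoid(2 * theta * x_q * x_-q), so the
# pseudo-log-likelihood only ever needs one-dimensional normalizing constants.
rng = np.random.default_rng(0)
theta_true = 1.0

def sample(n, theta):
    # Exact sampling is easy with two binary variables: enumerate the 4 states.
    states = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
    w = np.exp(theta * states[:, 0] * states[:, 1])
    return states[rng.choice(4, size=n, p=w / w.sum())]

def pseudo_loglik(X, theta):
    # Average over data points and coordinates of log p(x_q | x_-q).
    total = 0.0
    for q in (0, 1):
        z = 2.0 * theta * X[:, q] * X[:, 1 - q]
        total += -np.log1p(np.exp(-z)).sum()  # log sigmoid(z)
    return total / len(X)

X = sample(2000, theta_true)
pll_true, pll_zero = pseudo_loglik(X, theta_true), pseudo_loglik(X, 0.0)
print(pll_true > pll_zero)  # True: the data-generating coupling scores higher
```

In the paper's setting each conditional would instead be a log-sum-exp over the values of $X_q$ of a sum of MLP log-factors, and the objective would be maximized with Adam, as described above.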
We fit a model for $f(X) := \mathbb{E}[Y|X]$ using a supervised learning method. In our experiments, another MLP. At test time, we evaluate any $\mathbb{E}[Y; \sigma^\star]$ by Monte Carlo, that is, we generate a large sample from the learned $p(x; \sigma)$ by Gibbs sampling, then average $f(X)$ over this sample. Simulations of coverage are very expensive (it requires generating at least dozens of datasets from the same model, and fitting a deep-energy based model to each to get a single evaluation). We have begun work on this, but experiments are ongoing and will not be ready ahead of the rebuttal period deadline. Our final manuscript will include coverage results either in the appendix or main text, if space allows. Without prior knowledge of what pseudo-likelihood means, we agree that it is tedious to get this from the Julia code provided in the supplement, and we will tweak the supplementary material to add the above. As we say in the main text, we have no reason to be attached to pseudo-likelihood, and one should feel free to use other methods for energy-based learning such as score matching, contrastive divergence, etc. **What exactly $X$ is in these datasets:** In the datasets, the intermediate variables $X$ refer to concentrations of metabolites in cells, such as lipids or, in the case of DREAM, (simulated) gene expression. The Jupyter notebook and the original references provide a description of the variables; see also Appendix D. **Outcome models:** they are artificially generated as follows for each of the 100 simulated problems in each of the two studies. 
Basically, for each of the 100 case studies we artificially generate a vector $\lambda_{true}$ and simulate outcome data as $$Y = \tanh(\lambda_{true}^{\mathsf T}X) + \epsilon_y.$$ The scale of $\lambda_{true}$ and the variance of $\epsilon_y$ are set so that $var(\lambda_{true}^{\mathsf T}X)$ under the baseline regime is about 1.5 to four times $var(\epsilon_y)$, with the ratio chosen uniformly at random from $[1.5, 4]$. For each regime $\sigma$, we then numerically compute the ground truth $\mathbb{E}[Y; \sigma]$ by Monte Carlo. For more details, we refer reviewers to the section "Synthetic Ground Truth and Training Data Generation" in the Jupyter notebook `starting_demo.ipynb`, where this process is described more explicitly without requiring much code reading. **"...knowledge of the set of variables in $F_k$...":** Yes, it does. How do we get it? Similar to stages in causal discovery algorithms that start from an undirected network (like the PC algorithm), we can construct an undirected network that starts fully connected, and repeatedly ask an independence oracle (human expert or statistical test) whether two variables $V_i$ and $V_j$ are conditionally independent given all other variables. Special attention should be paid to the meaning of "conditional independence" when speaking of two intervention variables, see e.g. Section 7 of Constantinou and Dawid, "Extended conditional independence and applications in causal inference" (Annals of Statistics, 2017). This is how Figure 1c comes to be if we query an independence constraint oracle that follows the independencies entailed by the DAG in Figure 1a. The cliques in this graph (ignoring the directionality) can conservatively be translated to factors in a factor graph, as in Figure 1b. (There are more details about this in the response to T46G). **Comparison to [2]:** excellent point, we intended to say that about [23].
Approach [2] is very interesting but with widely different assumptions (and data requirements), more akin to sparse ANOVA in Fourier space if we were to give a very simplified description of it. We attempted to fit our very-sparse-design data with their code, but we did not succeed in getting meaningful stable results -- to be fair, these are not cases where [2] is meant to be used anyway. We will amend the text to clarify this point. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the comments, and clarifications. I still believe that a score of 7 is a fair score of this paper, and I will keep it as such.
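The test-time Monte Carlo procedure described earlier in this rebuttal (Gibbs-sample from the learned $p(x; \sigma)$, then average a fitted outcome model) can be sketched as follows. The toy binary energy model and the outcome function `f` are illustrative stand-ins for the learned MLPs, not the authors' implementation:

```python
import numpy as np

# Sketch of the test-time procedure: sample from a learned p(x; sigma)
# by Gibbs sampling, then average a fitted outcome model f(X).  The toy
# binary energy model and f below are illustrative stand-ins.
rng = np.random.default_rng(1)
p = 3
scopes = [(0, 1), (1, 2)]
theta = [rng.normal(size=(2, 2, 2)) for _ in scopes]

def log_unnorm(x, sigma):
    return sum(theta[k][sigma[k]][x[a], x[b]] for k, (a, b) in enumerate(scopes))

def gibbs_sample(sigma, n_samples=2000, burn_in=200):
    x = rng.integers(0, 2, size=p)
    out = []
    for t in range(burn_in + n_samples):
        for q in range(p):
            l0 = log_unnorm(np.r_[x[:q], 0, x[q + 1:]], sigma)
            l1 = log_unnorm(np.r_[x[:q], 1, x[q + 1:]], sigma)
            x[q] = rng.random() < 1.0 / (1.0 + np.exp(l0 - l1))  # P(X_q = 1 | rest)
        if t >= burn_in:
            out.append(x.copy())
    return np.array(out)

def f(X):                                # stand-in for the fitted E[Y | X] model
    return np.tanh(X.sum(axis=1) - 1.5)

samples = gibbs_sample(sigma=(0, 1))
estimate = f(samples).mean()             # Monte Carlo estimate of E[Y; sigma]
```

Each Gibbs update only needs the two coordinate-wise energies, mirroring the per-coordinate normalization used in the pseudo-likelihood fit.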
Rebuttal 1: Rebuttal: Thank you all for such detailed and helpful reviews! Getting the time and attention of no fewer than six experts is no small privilege. Although there is some overlap among questions, it's not too substantive. As there are many reviewers, we think it will be most convenient for them if we make our rebuttals self-contained, so there will be some degree of repetition in the replies. Again, many thanks for your help!
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper maps from available observational and experimental datasets to unseen interventional distributions given the factorization of the joint distribution of the intervened system. They utilize an interventional factor model equipped with factor graphs to provide necessary and sufficient conditions for causal effect identifiability. Finally, they provide some practical algorithms to estimate the final outcomes. Strengths: The authors provided good examples of different concepts in the main paper and in the appendix. It's a different approach from those that use causal graphs, and quite interesting. The paper has a nice flow in the writing and is easy to read. The explanations and detailed examples provided in the appendix are worth appreciating, and they are quite helpful for readers. Weaknesses: Main weakness: * Some concepts such as junction tree, hypervertex, and message passing algorithms should have been defined with short examples in the main paper since they are used in the main paper theorems. * Multiple approaches have been described in sections 3 and 4, but most of them were not explained with enough details and intuition. * The main contribution seems a little unclear. The authors first discussed two approaches in section 3. Later in section 4, they mentioned that in practice using deep energy-based models works better. They suggested employing a differentiable black box to learn parameters for each factor and estimate E[Y|x]. Using a black box is not completely novel. It appears that the approaches in section 3 are not very useful and thus the authors are proposing three more methods that work better in practice. I would request the authors to clarify the mentioned issues. * In the experiment section, the authors completely ignore the approaches they discussed in Theorem 3.1 or Theorem 3.2. They considered the deep learning approach as the best version and compared the benchmarks with that.
Minor comments: * Some concepts in sections 1 and 2 are used without proper definitions and examples. Readers would need that background knowledge beforehand to go with the flow of the paper. * Line 119: Unmeasured confounders are used without any definitions or proper examples. * Line 207: Theorem 3.2 seems less intuitive. A proof sketch in the main paper would be appreciated. * Line 227: The approach “Deep energy-based models and direct regression” should be described in more detail since this approach worked better than other approaches. * Conformity scores are not defined in the main paper although they are used in Theorem 4.1. I have read the author's rebuttal. The authors resolved some of my concerns. But I am not confident enough to increase the scores. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would request the authors to provide explanations for the previously mentioned main weaknesses and answer the following questions: * For the example in line 74, why is the DAG $\sigma \rightarrow X$, $\sigma \rightarrow Y \leftarrow X$, not an option? * Are the authors refuting the utility of Theorems 3.1 and 3.2 and adopting only the new approaches provided in section 4? * How is this paper dealing with cycles and confounders? The authors should mention that more explicitly. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: If the variables have more than two states or we have more variables, the number of interventional datasets will also be significantly higher. In real-life applications, collecting this many interventional datasets is not feasible. The authors should discuss this challenge in detail.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for commenting that our paper is "quite interesting" and that it "has a nice flow in the writing and is easy to read". Much appreciated! In the following, we address your comments and questions. **"Some concepts... should have been defined... .":** We were struggling for space and we decided that concepts which are present in textbooks would be referred to without explicit definition in the main body of the text, but we fully agree that this isn't ideal. As a possible camera-ready allows for another page, we will take the opportunity to use it for modifications like these. **"...details and intuition.":** We tried to present a minimal example of the junction tree approach in lines 151-159. We will flesh it out a bit more, and point more explicitly to Appendix A where a more complex example is given, assuming the opportunity to provide an accepted camera-ready version. Likewise, we will explicitly write further intermediate steps into the explanation of the algebraic method in the caption of Figure 3. It can be explained as follows. Essentially, we need to find a vector $\{q_i\}$ such that $p(x; \sigma) \propto \prod_{i = 1}^t p(x; \sigma^i)^{q_i}$, or show that there is no solution. With $x$ fixed, this is done by treating each factor $f_k(x_{S_k}, \sigma_{F_k})$ as a symbol in an algebraic system. This means that each density, up to a multiplicative constant, is a monomial in those symbols. For instance, in the example in Figure 3, we use the symbol $f_1^{00}$ to denote the factor $f_1(x; \sigma_1 = 0, \sigma_2 = 0)$, while e.g. $f_3^{10}$ denotes $f_3(x; \sigma_1 = 1, \sigma_3 = 0)$ and so on. The monomial corresponding to $p(x; \sigma = (1, 1, 1))$ is $f_1^{11}f_2^{11}f_3^{11}$.
Now, a PR transformation from the set of 7 training densities in Figure 3 (all configurations of binary $\{\sigma_1, \sigma_2, \sigma_3\}$ except for the test configuration $\sigma_1 = \sigma_2 = \sigma_3 = 1$) is of the form $$(f_1^{00}f_2^{00}f_3^{00})^{q_1} \times (f_1^{01}f_2^{10}f_3^{00})^{q_2} \times \dots \times (f_1^{10}f_2^{01}f_3^{11})^{q_7}.$$ For an algebraic identity between $f_1^{11}f_2^{11}f_3^{11}$ and the above to hold (up to a multiplicative factor), it is necessary that $q_1, q_2, \dots, q_7$ are such that, in the resulting multiplication, the resulting exponents for $f_1^{11}$, $f_2^{11}$, and $f_3^{11}$ are 1, and all others are zero. For instance, $f_1^{00}$ appears in the regimes in lines 1 and 5 in the table of Figure 3. Hence, we must have $q_1 + q_5 = 0$. Symbol $f_1^{01}$ appears in lines 2 and 6, and therefore we must have $q_2 + q_6 = 0$. This goes on for all symbols (columns in the table), ending with $f_3^{11}$, which only appears in the $7^{th}$ training density, making $q_7 = 1$. This gives a system with one solution, the one shown in the caption. The admittedly ugly Eq. (4) is a compact way of describing this system of equations. **"It appears that the approaches in section 3 are not very useful...":**. It is clear that we should have been more explicit in the transition between these sections. Section 3 is about identification methods, and those don't necessarily need to translate directly into estimation methods. By analogy, think of the formulas implied by the do-calculus or G-computation, and how it's not necessarily the case they are used in estimation methods by plugging-in a corresponding estimate of a density. That is, the proofs in Section 3 are constructive, but this doesn't imply they should lead to a one-to-one correspondence with a learning algorithm (for instance, G-estimation looks very different from what a direct approach based on G-computation may suggest). 
As long as we know that we can get $\Sigma_{test}$ by products of ratios of densities in $\Sigma_{train}$, it doesn't matter whether we can identify each factor, as long as all densities follow the common factorization that shares factors; a purely likelihood-based approach then suffices (we did try density ratio estimation methods, but they worked poorly out-of-the-box and we decided to omit further discussion of them). We will clarify this in a final manuscript with a more fleshed-out bridge between Sections 3 and 4. All problems in Section 5 can be identified based on the fact that they satisfy the conditions of Theorem 3.1, as there is no more than one $\sigma$ per factor and $\Sigma_{train}$ spans all necessary elementwise support for each $\sigma$ variable (a scenario we mention in Section 3). We will comment on this explicitly in a revised manuscript. **"dealing with cycles and confounders?":** We do not model confounders/cycles explicitly. As the only information that matters is the Markov blanket, the presence of an edge may be due to confounders or cycles or "directed dependence", and for better or worse that's all treated on the same footing. For instance, if the generative model happens to be a DAG with one latent variable $U$ and edges $$X_1 \leftarrow U, X_1 \leftarrow \sigma_1;$$ $$X_2 \leftarrow U, X_2 \leftarrow X_1, X_2 \leftarrow \sigma_2;$$ $$X_3 \leftarrow X_2, X_3 \leftarrow \sigma_3,$$ then one factorization of the above after marginalizing $U$ is defined by two factors, $f_1(x_1, x_2; \sigma_1, \sigma_2)$ and $f_2(x_2, x_3; \sigma_3)$. The presence of $U$ makes identification harder due to the induced interaction of $\sigma_1$ and $\sigma_2$, but the implied factorization by itself doesn't introduce structural constraints not found in the original DAG. With more confounders, we may get a fully uninformative model; see the **Scope and Limitations** section.
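The exponent-matching system behind a PR transformation described in this thread (factor symbols as columns, training regimes as rows, solve for the exponents $q_i$) can be sketched numerically. The regime/symbol layout below is a small made-up example, not the table of Figure 3:

```python
import numpy as np

# Sketch of the exponent-matching system behind a PR transformation.
# Rows of A are training regimes, columns are factor symbols;
# A[i, s] = 1 if symbol s appears in the monomial for training density
# i.  We look for q with A^T q = b, where b is the exponent vector of
# the target monomial.  The layout is a made-up example.
symbols = ["f1_00", "f1_01", "f1_10", "f1_11", "f2_0", "f2_1"]
# each training density is a product of one f1-symbol and one f2-symbol
train = [("f1_00", "f2_0"),
         ("f1_01", "f2_1"),
         ("f1_10", "f2_1"),
         ("f1_11", "f2_0"),
         ("f1_00", "f2_1")]
target = ("f1_11", "f2_1")               # unseen test regime

col = {s: j for j, s in enumerate(symbols)}
A = np.zeros((len(train), len(symbols)))
for i, mono in enumerate(train):
    for s in mono:
        A[i, col[s]] = 1.0
b = np.zeros(len(symbols))
for s in target:
    b[col[s]] = 1.0

q, *_ = np.linalg.lstsq(A.T, b, rcond=None)
identifiable = bool(np.allclose(A.T @ q, b))  # here: q = (-1, 0, 0, 1, 1)
```

A zero residual means the target density equals the product of training densities raised to the exponents in `q` (up to normalization); a nonzero residual certifies that no PR transformation exists for this layout.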
**"Why isn't [this DAG] an option?":** It may certainly be the case that a system has edges from $\sigma$ to $Y$. We don't cover this case. A case where $Y$ may be shielded from $\sigma$ by $X$ is when $Y$ is a longer-term phenotype that is a predictable result of *which* values $X$ attains at some point in equilibrium, without needing to know *how* ($\sigma$) those measurements came to be. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their clarifications. I would like to hear the authors' opinions and explanations about the limitations I have mentioned for this paper, such as the high number of interventional datasets required. --- Reply to Comment 1.1.1: Title: Number of datasets Comment: Thanks for the opportunity to follow up on this (we ran out of space in the original box, and we thought it would be confusing to use a global response). When the number of values per variable is particularly high, we can parameterize a potential function as a smooth function of the values of each intervention variable. The identifiability results can show whether, for a particular training grid of points, we can identify (say) the product space over the combination of training intervention values, and uncertainty quantification methods can be used to indicate to which extent we have information to interpolate/smooth over test treatment levels that lie in between the training levels. (To illustrate this with an analogy to the DAG case, imagine that we had each conditional distribution for a given random variable not as a set of independent regressions - one for each combination of treatment parent values - but instead as a smooth mapping such as a Gaussian process with a real-valued encoding of the treatments as input.) For a large number of variables, this is a generally hard problem regardless of method. However, the identification problem is with respect to a test set that does not need to span all possible combinations of interventions to be useful (e.g.
what to focus on may be even lower-order interactions, such as in the pairwise DREAM analysis we use as illustration), with the junction tree approach showing a divide-and-conquer structure of subproblems which can be solved without providing solutions to larger joint set of variables. Many thanks again!
Summary: The paper introduces Interventional Factor Models (IFM), a graphical model that encodes assumptions about a data-generating process and the interventions that can be performed on top of it. The model explicitly includes intervention variables to impose a factorization over the distributions generated by the model under any interventional regime. With a variable Y being a function of the other observable variables X (or at least independent of the interventional regimes given X), the problem setting aims to learn E[Y] under a particular interventional regime of interest, given a subset of all possible interventional distributions and a corresponding IFM. For this task, the paper presents identification criteria and learning/estimation algorithms to find the parameter of interest. The authors present the results of experiments on semi-synthetic data to evaluate the performance of their approach. I have some questions related to the scope of the results in the paper. I leave them in the comments section and would like to hear from the authors. I'm willing to update my scores based on the response. Strengths: - The paper presents a novel graphical model that can be used to generalize known experimental settings to new unseen settings of interest. - Relevant setup, definitions, and results needed to understand the result are included. - The writing is clear enough to understand the setting and main contributions of the paper. - The paper presents sound results for the identification of causal effects and algorithms to estimate them, based on seen experimental settings and an IFM. Weaknesses: - The introduction and contributions claim that Thm. 3.2 is necessary, in general. However, the statement talks about PR-transformations in particular. It is not clear to me if it is proven that there are no other identification routes than PR-transformations. - It is mentioned in the paper that the IFM graph could be elicited from domain experts.
This is something that sounds natural to me in the case of a DAG, but not so intuitive in this case. I believe the reader could benefit from a more explicit example where variables have associated meaning and one could make sense of the assumptions. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - When Thm. 3.2 says no, is it the case that there is no way to identify the causal effect of interest from the given model and distributions, even if it is not in the form of a PR-transformation? - If not, does it mean that Thm. 3.2 is necessary only under PR-transformations? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I believe the discussion on limitations and societal impacts is sufficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the appreciation and excellent questions! We found the clarification questions about the generality of identifiability to be particularly useful for readers. **On Theorem 3.2:** Indeed we don't make any claims of completeness outside of the class of PR transformations. We do conjecture, though, that PR transformations include all "practical" ways of finding a mapping from distributions in $\Sigma_{train}$, which we can explain as follows. If we fix $x$ and think of each factor $f_k(x; \sigma)$ as a symbol indexed by $\sigma$ (as in the caption of Figure 3), then each unnormalized density in each regime is a monomial over those symbols. A symbolic mapping from the set of monomials encoded by $\Sigma_{train}$ to a target monomial of interest (i.e., a test set unnormalized density) boils down to ratios and products, as monomials are closed "only" under multiplication (we didn't elaborate that the exponents in the PR transformation will be integers, as this is not necessary to assume in order to solve it). We will clarify in the manuscript to what extent PR transformations are general enough and adjust the abstract accordingly. **IFM elicitation:** similar to stages in causal discovery algorithms that start from an undirected network (like the PC algorithm), we can construct an undirected network that starts fully connected, and repeatedly ask an independence oracle whether two variables $V_i$ and $V_j$ are conditionally independent given all other variables. Special attention should be paid to the meaning of "conditional independence" when speaking of two intervention variables, see e.g. Section 7 of Constantinou and Dawid, "Extended conditional independence and applications in causal inference" (Annals of Statistics, 2017). This is how Figure 1c comes to be if we query an independence constraint oracle that follows the independencies entailed by the DAG in Figure 1a.
The cliques in this graph (ignoring the directionality) can conservatively be translated to factors in a factor graph, as in Figure 1b. The independence constraint oracle can be substituted by independence constraint/interaction tests (using the knowledge that the $\sigma_i$s are regime indicators and hence their sampling distribution is chosen by design). Despite relying on interventional data, faithfulness-like assumptions still play a role, e.g. pairwise independence leading to joint independence. Moreover, as we observe only a limited number of values of $\Sigma$, assumptions about the lack of contextual independencies also play a role (e.g., if $X_i$ doesn't change in distribution when $\sigma_j$ changes from 0 to 1 while keeping all other $\sigma$ variables at 0, we may want to assume that this implies that $X_i$ and $\sigma_j$ should not be in any factor, regardless of the configuration of $\sigma$). Coming from a more domain-specific theory, such structures also emerge from models of equilibrium. Consider this example that can be found in [8], Section 3.1.1, where $f$ denotes differential equations at equilibrium, $X$ denotes observed random variables, $U$ denotes (mutually independent) latent variables and $I$ denotes an intervention indicator: $$f_I: X_I - U_I = 0$$ $$f_D: U_1(X_I - X_O) = 0$$ $$f_P: U_2(gU_3X_D - X_P) = 0$$ $$f_O: U_4(U_5I_KX_P - X_O) = 0$$ After marginalizing the $U$ variables, what we get is an IFM (this is not the whole story, though, as other non-graphical constraints may take place given the particular equations; [8] discusses $U_I$ being marginally independent of $I_K$ on top of the above). In general, energy-based models are to be interpreted as conjunctions of soft constraints, the factor graph is one implication of a system of stochastic differential equations, and interventions denote changes to particular constraints.
An SDE model may have *other* assumptions on top of the factorization, and parameters which carry particular meaningful interpretations, but this fits well with our claims that the IFM is a "minimalist" family of models in terms of structural assumptions - a reasonably conservative direction to follow, particularly when the dynamics of many natural phenomena cannot (currently) be measured at the individual level, as is the case for much cell biology data. [33] has further elaborations on some of these ideas. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my questions and comments. I still feel that the paper needs more concrete examples in terms of connecting with real-world systems. This is especially important when introducing a new family of graphical models that comes with a particular kind of implied assumptions. This gives the reader not only a better understanding of the model, but also a sense of how those models could be elicited in practice and the kinds of systems where they could be most useful. Together with the previous point, the concrete examples could help in understanding the other limitation of the results, namely, needing an outcome Y that becomes independent of the treatment regimes given the other variables. The possible impact of the results critically depends on whether interesting systems fit this assumption, and whether there are advantages in using IFM+results instead of other models. Having said that, I'm raising my score to a 6 after the authors' response.
Hierarchical Bias-Driven Stratification for Interpretable Causal Effect Estimation
Reject
Summary: The paper addresses the overlap violation problem in observational datasets for causal inference by presenting an interpretable balancing method for overlap violation identification and causal effect estimation for binary treatments. The method BICauseTree adapts decision tree classifiers to the stated problem by recursively splitting the data population into non-overlap-violating subgroups based on covariate dissimilarity and treatment heterogeneity. The major advantage of the presented method in comparison to existing balancing methods is the interpretability of the prediction process. Strengths: • The authors evaluate their method on both synthetic and real-world benchmarking datasets. Weaknesses: - The proposed method is highly similar to the work in reference 12. Furthermore, related work is not discussed appropriately (section 2). It is thus unclear how this work significantly differs from previous contributions in the literature. The originality of the submission has to be considered very limited. - The manuscript presents a complete piece of work. Claims about the performance of the proposed method are supported by experimental results. Nevertheless, an experimental study with sophisticated baseline methods is missing. A performance comparison with a Causal Forest model would be desirable. - Claims aiming at motivating the method are neither supported quantitatively nor experimentally (e.g., an analysis with different levels of overlap violation; unbiased estimation). - The submission lacks clarity due to multiple grammar errors and many nested arguments. The mathematical notation (section 3.1) lacks formal correctness. - The citation style does not agree with the required format for NeurIPS submissions. - The statement of a high ASMD indicating a confounder (line 164) needs to be justified. - The paper would profit from a revision of the language and the consistency of the presented arguments (e.g., lines 84, 181/182).
Furthermore, the figures and sections need to be referenced correctly, as also recognized by the authors in the appendix. - The authors claim interpretability of the method but do not provide evidence for the statement. Here, a user study would be desirable, etc. - The method is limited to ATE. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: - What is the major difference between the method BICauseTree and the existing method PositiviTree (reference 12)? The novelty of the method cannot be deduced from the manuscript. - How do the benefits of the presented method outweigh the negative impact of the information loss due to the "prediction abstention mechanism"? A quantitative assessment would be of interest. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 1 poor Contribution: 1 poor Limitations: - The authors have stated the limitations of their work. The section could be improved by highlighting the general weaknesses of single trees in prediction settings, which naturally carry over to the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time, comments, and for sharing their expertise. We respectfully disagree with their framing of our method as primarily “addressing the overlap violation problem”. Our method, first and foremost, estimates causal effects by stratification into sub-populations that form natural experiments. It has the useful consequence of being able to detect positivity violations and abstain from estimation in such cases, but this is secondary to estimating causal effects. - “The proposed method is highly similar to the work in reference 12”: We regret and apologize that we did not convey the differences more clearly. Our method first differs in essence: it is an effect estimation method while reference 12 is only a method for detecting overlap violations. While our method can also identify and characterize overlap violations, this is not its primary goal, but a useful consequence. Secondly, they differ in details: reference 12 uses an off-the-shelf decision tree maximizing separability of treatment groups, while our method is a tree with a custom-made objective function to maximize balancing (lines 98-102). Therefore one may not use the leaves from reference 12 as natural experiments claiming less residual confounding within each node. - Comparison to sophisticated baseline methods: The discussion (lines 366-369) acknowledges that our method can have some residual confounding bias, that it will often not have the best estimation accuracy, and that we thus explore the tradeoff between interpretability and accuracy. Our experiments show we are often no better than a simple IPW. Therefore, we are most probably not better than any more advanced causal models. Our strength is in being more interpretable than them. We compared our method to a (single) Causal Tree as it is another decision tree-based model with a custom objective function, enjoying the interpretability benefits decision trees provide.
However, once we move into forests, the model becomes uninterpretable and, as such, there's no difference whether we compare to it or to DragonNet, or any other advanced estimator, for the reason described above. - Motivation is not supported: The motivation is indeed not supported quantitatively, as it is a desideratum for applications. We believe the claim that high-stakes applications will prefer interpretable models is qualitative, not quantitative. - Grammar errors and formal correctness: We thank the reviewer for the comment; we will make another effort to copy-edit our manuscript. Section 3.1 follows the established presentation of Rubin's causal model [see for example landmark causal inference papers: 1, 2, 3, 4]. - Citation style: We apologize and will fix that. - “Line 164 confounder statement”: This was bad phrasing on our side, mixing up confounders with confounding bias. We already assume all confounders are observed, thus instead it should read: “The reason for choosing the confounder with the highest ASMD is that it is most likely to cause the most confounding bias, and therefore adjusting for it will likely minimize the residual confounding bias.“ We have added a supplementary figure showing this in practice, where we use a random feature split instead of the feature with the highest ASMD, and show that both in terms of estimation bias and in terms of confounding bias, using the highest-ASMD feature as the splitting feature gives better performance (see plot in the additional plots PDF of the rebuttal). - Revision of the language: Again, we will revise and copy-edit to the best of our abilities. Appendix figure references will be fixed the next time the LaTeX document is rendered. - A user study testing the interpretability of the method: This is true, but out of scope. We take it as a fact that decision trees are more interpretable than other machine learning models; this has been previously established in the literature. See: [5, 6, 7] - Method limited to ATE: This is true.
The ATE is often an estimand of interest. - Difference from reference 12: see point 1 above. - “The negative impact of the 'prediction abstention mechanism'“: We don't believe such a negative impact exists. It is impossible to make valid data-based causal claims when there is no overlap. Therefore, where the model abstains is where there is inherent aleatoric uncertainty in the system. Our advantage over other models is the capability to precisely define, in the covariate space, those populations we can infer on and the ones we cannot. References: [1] Imai, Kosuke, and Marc Ratkovic. "Covariate balancing propensity score." Journal of the Royal Statistical Society Series B: Statistical Methodology 76.1 (2014): 243-263. [2] Abadie, Alberto, and Guido W. Imbens. "Bias-corrected matching estimators for average treatment effects." Journal of Business & Economic Statistics 29.1 (2011): 1-11. [3] Imbens, Guido W., and Jeffrey M. Wooldridge. "Recent developments in the econometrics of program evaluation." Journal of Economic Literature 47.1 (2009): 5-86. [4] Austin, Peter C. "An introduction to propensity score methods for reducing the effects of confounding in observational studies." Multivariate Behavioral Research 46.3 (2011): 399-424. [5] Silva, Andrew, et al. "Optimization methods for interpretable differentiable decision trees applied to reinforcement learning." International Conference on Artificial Intelligence and Statistics. PMLR, 2020. [6] Slack, Dylan, et al. "Assessing the local interpretability of machine learning models." arXiv preprint arXiv:1902.03501 (2019). [7] Allahyari, Hiva, and Niklas Lavesson. "User-oriented assessment of classification model understandability." 11th Scandinavian Conference on Artificial Intelligence. IOS Press, 2011. --- Rebuttal Comment 1.1: Comment: The authors addressed and clarified the main questions and weaknesses.
Nevertheless, the novelty of the work is still highly limited and the justification for the statements supporting the method needs to be improved. Overall, I therefore raise the rating score by one point, but still consider the work insufficient for a major contribution to NeurIPS. I think my comments above will be very helpful for revising the paper. Given that the authors are only brief in their rebuttal on the differences with [12], I suggest that they also include a more technical deep dive in the appendix. As I stated above, it would be nice to see some insights in line with the motivation (e.g., plotting a tree as a case study).
Summary: This paper proposes a new method called BICauseTree for interpretable causal effect estimation. BICauseTree is a hierarchical bias-driven stratification method that identifies clusters where natural experiments occur locally. The method is designed to reduce treatment allocation bias and improve interpretability. The authors evaluate the performance of BICauseTree on several datasets and compare it to existing approaches. They find that BICauseTree performs well in terms of bias-interpretability tradeoff and outperforms existing methods in some cases. Overall, the paper presents a novel and promising approach to causal effect estimation that could have important applications in various fields. Strengths: 1. Novelty: The paper proposes a novel method called BICauseTree for estimating causal effects from observational data. The method is based on a hierarchical bias-driven stratification approach that identifies clusters where natural experiments occur locally. The method builds on decision trees to reduce treatment allocation bias and provides a covariate-based definition of the target population. The method is interpretable and outperforms other state-of-the-art methods in reducing treatment allocation bias while maintaining interpretability. 2. Significance: Causal effect estimation from observational data is an important analytical approach for data-driven policy-making. However, due to the inherent lack of ground truth in causal inference, accepting such recommendations requires transparency and explainability. The proposed method addresses this issue by providing an interpretable and unbiased method for causal effect estimation. The method has the potential to be applied in various domains, including healthcare, social sciences, and economics. 3. Experimental Evaluation: The paper provides a thorough experimental evaluation of the proposed method using synthetic and realistic datasets. 
The authors compare the performance of their method with other state-of-the-art methods and show that their method has lower bias and comparable variance. They also conduct sensitivity analyses to evaluate the robustness of their method to violations of the assumptions. The experimental evaluation provides strong evidence to support the claims made in the paper. Weaknesses: 1. Limited Scope: The paper focuses on a specific method for causal effect estimation from observational data, and the scope of the paper is relatively narrow (especially related to the tree-based models). While the proposed method is novel and has some advantages over other methods, it may not be of interest to a broad audience: (1) The method relies on the quality of the data and the assumptions made in the model. If the data is noisy or contains missing values, the method may produce biased estimates. (2) The method may not be suitable for high-dimensional data, as the number of covariates may increase the complexity of the decision tree and lead to overfitting. (3) The method may not be suitable for datasets with small sample sizes, as the stratification may lead to small sample sizes in some subgroups, which may affect the accuracy of the estimates. (4) The method may not be suitable for datasets with complex interactions between the covariates, as the decision tree may not capture these interactions effectively. 2. Experimental Evaluation: While the paper provides an experimental evaluation of the proposed method, the evaluation is limited in scope and does not provide a comprehensive comparison with other state-of-the-art methods. The experimental evaluation would benefit from a more comprehensive comparison with other methods (such as TARNet from the machine learning domain) and a more detailed analysis of the results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Figure 1, the IPW looks very close to the BICauseTree.
How can we apply post hoc explanations in such cases in general? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful review, insights, and comments, and for the time spent reviewing our work. - Reliance on data quality and on model assumptions: We agree that data quality is a major factor in the success of the model in estimating effects; however, this is true for all models. If there are many missing values, not all confounding features are measured, or there are other data limitations, the ability of any model to infer correct estimates is limited. In terms of the assumptions made, indeed, if the data does not contain natural experiments, the model will not be able to stratify the population in such a way that all confounding bias is removed. This is easily inspected by looking at the ASMD values in the leaves of the tree, allowing the user to be mindful of the ability of the model to give accurate estimates, even in the absence of ground truth, as is the case in real-world applications. - Dealing with high-dimensional data: High-dimensional data indeed has the potential to lead to a complex tree in some situations. We note, however, that in theory we have less of a problem dealing with high-dimensional data than other models do, since the tree can ultimately pick a small subset of features to separate the population, if such subpopulations naturally exist in the data. In comparison, in other methods (Matching, IPW, etc.) high-dimensional data implies having to consider the entire span of features in the model, leading to complex models; here, in the presence of a relatively sparse subset of features that define the natural experiments in the population, the tree should identify these and use the small subset instead of the entire set of features. We have added a supplementary figure that supports this, where we demonstrate in a synthetic experiment that increasing the number of features has little to no effect on performance (see graphs in the 'additional plots' PDF of the rebuttal).
As a contrary example, performing exact Matching on high-dimensional data is almost impossible, and thus Matching in such cases resorts to reducing the number of features to a single distance metric (i.e., propensity), leading to inaccurate matches. - The need for a large sample size for accurate estimation: As our model is tree-based, it requires a large sample size for accurate estimation, similarly to other tree-based models. This is indeed discussed in the text, in lines 386-388. Using decision trees allows our method to inherit the advantages of trees, such as their interpretability, but also their limitations. - Dealing with complex interactions between the covariates: We use a tree-based model, which allows capturing all logical interactions between features. Indeed, some functional forms (e.g., diagonal lines) are more challenging for trees to capture, similar to how ill-specified models will struggle to capture some functional forms (e.g., linear models capturing non-linear forms). This is indeed discussed in the text; see lines 366-369. There is a trade-off between the complexity of the model used (and by extension its ability to capture complex functional forms of the features) and the interpretability of the model. Very complex models might capture complex features, but they are very hard to interpret and thus not very practical for sensitive domains, where the experts/decision makers should be able to criticize and explain the decisions made by following the inference of a model. We assume that the relevant features are measured and given to the model as input. A possible preliminary step that is always available to the benefit of the researcher is feature engineering, to create new, complex features from the initial ones.
In addition, importantly, the tree allows identifying leaves that did not achieve balance between the treatment groups (which could possibly stem from the lack of the necessary complex features to split on), and adding a more complex model there to learn the propensities/effect. - Experimental Evaluation: We focus in this paper on the ATE, whereas most state-of-the-art methods focus on the CATE, rendering the comparison with these methods irrelevant. In addition, we do not claim to outperform state-of-the-art methods in terms of estimation bias, but rather propose a method with comparable performance (in most cases) that is also interpretable. We refer the reviewer to the discussion on the trade-off between bias and interpretability in lines 297-310 and 366-370. - Applying post-hoc explanations to other models: As mentioned, the performance of IPW is very close to that of BICauseTree in most cases. It is possible to apply post-hoc models for explaining non-interpretable methods such as IPW. However, this requires using models on top of models. One method to get such explanations is using SHAP values [1], but other methods also exist [2]. We note, however, that these methods provide explanations and not an inherent interpretation of the model. Using explanation models, it is possible for the most explainable feature to end up having no actual effect on the model decision. references: [1] Syrgkanis, Vasilis, et al. "Causal inference and machine learning in practice with econml and causalml: Industrial use cases at microsoft, tripadvisor, uber." Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021. [2] Bodria, F., Giannotti, F., Guidotti, R., Naretto, F., Pedreschi, D., & Rinzivillo, S. (2023). Benchmarking and survey of explanation methods for black box models. Data Mining and Knowledge Discovery, 1-60. --- Rebuttal Comment 1.1: Title: thanks for the reply Comment: The authors addressed all my concerns and I would like to keep my positive score.
Summary: This paper focuses on achieving interpretable causal effect estimation, where the goal is to ensure that each decision within the algorithm is explicit and traceable. The authors propose a decision tree-based balancing method to address this problem, which identifies clusters where local natural experiments occur. The effectiveness of the proposed algorithm is empirically evaluated using synthetic and semi-synthetic data. The paper also did several ablation studies on the trade-off between interpretability and bias and the consistency of the decision tree. Strengths: Overall, this paper is well-executed and demonstrates several notable strengths. - This paper is very clear. It effectively presents complex ideas in an easily understandable manner. - The problem addressed in the paper is well-motivated and interesting. - The authors display a strong grasp of the related work in the field, effectively positioning their contribution within the existing literature. - The empirical analysis conducted in the paper is thorough and yields valuable insights. - The method is intuitive and well explained. Weaknesses: - Style file: One issue with this paper is that it doesn't follow the NeurIPS style guidelines, specifically regarding paragraph spacing. The paragraphs are not well-separated, which makes it harder to read and understand the content. This affects the overall flow and coherence of the paper. Additionally, the excessive content allowed due to the spacing issue may be seen as unfair to authors who followed the style guide correctly. This may be a potential ground for rejection. - Method: The rationale behind considering features with the highest ASMD as potential confounders is not well-explained. This is an important assumption in the paper, but it lacks a clear justification or empirical investigation. Providing additional explanations or conducting empirical studies would strengthen this aspect of the paper. 
- The experiment section of the paper is not self-contained. Although the motivating problem revolves around identifying subpopulations with natural experiments, this aspect is not adequately illustrated in the experiment section. Instead, the focus is primarily on bias analysis, with the analysis related to interpreting the causal effect estimation process deferred to the appendix. This undermines the fulfillment of the paper's fundamental promise to the readers. To address this issue, it is highly recommended that the authors integrate the analysis into the main paper, ensuring that the key components align with the paper's core premise. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why is a feature with high ASMD more likely to be a confounder? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful comments and suggestions, and for the time spent reading our work. - Paragraph spacing: We thank the reviewer for noticing; we have found the source of this error and fixed the spacing, leaving the main text untouched (by slightly resizing the figures). - Using the highest-ASMD feature as the splitting feature: ASMD is a well-established measure of balance between treatment groups in the field ([1], [2], [3]). The reason we choose the feature with the highest ASMD is that we assume (like most other methods in causal inference) that the measured covariates are indeed confounders. Therefore, the covariate with the highest ASMD is most likely to cause the most confounding bias, and therefore adjusting for it will likely minimize the residual confounding bias the most. We have added a supplementary figure showing this in practice, where we use a random feature split instead of the feature with the highest ASMD, and show that both in terms of estimation bias and in terms of feature imbalance, using the highest-ASMD feature as the splitting feature gives better performance (see plot in the additional plots PDF of the rebuttal). In addition, we will explain the motivation better and clarify it in the text, since apparently it is currently not sufficiently explained (line 164; please see point 4 in the general rebuttal). - Focus is on bias analysis instead of identifying subpopulations with natural experiments: As correctly pointed out by the reviewer, the main motivation of the method is identifying subpopulations with natural experiments. In such subpopulations, it is expected that the confounding bias between treated and control is minimized, leading to an unbiased estimate of the effect. Thus, in order to check whether the model was indeed able to identify natural experiments, a good proxy is its bias from the ground-truth effect (please also see point 2 in the general rebuttal).
Indeed, we demonstrate on a synthetic dataset, where the natural-experiment subpopulations are clearly defined, the ability of the tree to discover them (see Figure 1a), and show how this can occur also on real-world datasets, where we do not have ground truth for the subpopulations that constitute natural experiments, by proxy of an unbiased effect estimation. We will revise the text to make this connection clearer. - Interpretation of the causal effect estimation process deferred to the appendix: We agree with the reviewer that a critical strength of the model is its interpretability, which is one of the motivations for developing it, and that it would be beneficial to demonstrate this in the main text. However, due to space limitations, we could not add the tree structures and their explanations to the main text. We would like to emphasize that the ability of experts and decision makers to critique and evaluate the decisions made by the model (e.g., criticizing the subpopulations created by it, and which ones we can infer on and which we cannot due to positivity violations) is a unique feature of this model that is imperative in sensitive domains. In the absence of ground truth, as is the case in real-world data applications, this is highly desired and important. We will add further discussion of this in the text, as it is indeed a pivotal point of our model. references: [1] Austin, P. C. "Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples," Statistics in Medicine, vol. 28, no. 25, pp. 3083-3107, 2009. [2] Austin, P. C. (2009). Using the standardized difference to compare the prevalence of a binary variable between two groups in observational research. Communications in Statistics-Simulation and Computation, 38(6), 1228-1234. [3] Ali, M. S., Groenwold, R. H., Belitser, S. V., Pestman, W. R., Hoes, A. W., Roes, K. C., ... & Klungel, O. H. (2015).
Reporting of covariate selection and balance assessment in propensity score analysis is suboptimal: a systematic review. Journal of clinical epidemiology, 68(2), 122-131. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: ``the covariate with the highest ASMD is most likely to cause the most confounding bias, and therefore adjusting for it will likely minimize the residual confounding bias the most.`` This claim is not obviously true to me. Is there a proof, empirical studies, or existing work that demonstrates that? --- Reply to Comment 1.1.1: Comment: We are not aware of existing work that demonstrates this, but it is well accepted in the field, and we believe this behavior can be easily simulated. The following code snippet shows that adjusting for the confounder with the larger imbalance (larger ASMD) results in lower estimation bias of the treatment effect, i.e., less confounding bias (since confounding is the only source of bias in this case). In contrast, adjusting for the confounder with the smaller ASMD results in far more biased treatment effect estimation, i.e., the residual confounding bias left is larger. 
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def generate_data(N=1000, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.binomial(1, 0.5, size=N)
    X = rng.normal(0, 1, size=(N, 2))
    X[a == 1, 1] += 2  # x1 has a larger discrepancy between treated and control units
    y = X[:, 0] + X[:, 1] + a
    X = pd.DataFrame(X, columns=["x_smallASMD", "x_bigASMD"])
    a = pd.Series(a, name="a")
    y = pd.Series(y, name="y")
    return X, a, y

def calculate_asmd(X, a):
    is_treated = a == 1
    X1 = X.loc[is_treated]
    X0 = X.loc[~is_treated]
    smds = (X0.mean() - X1.mean()) / np.sqrt(X0.var() + X1.var())
    asmds = smds.abs()
    return asmds

X, a, y = generate_data()
data = X.join(a).join(y)

calculate_asmd(X, a)
# >>> x_smallASMD    0.031651
#     x_bigASMD      1.386414

# Adjusting for both covariates retrieves the true effect:
print(sm.formula.ols("y ~ x_smallASMD + x_bigASMD + a", data=data).fit().params["a"])  # 1.00

# Adjusting for the more-imbalanced covariate alone leads to a result somewhat closer to the true effect:
print(sm.formula.ols("y ~ x_bigASMD + a", data=data).fit().params["a"])  # 0.92

# Adjusting for the less-imbalanced covariate alone leads to high estimation bias:
print(sm.formula.ols("y ~ x_smallASMD + a", data=data).fit().params["a"])  # 2.98
```
Additionally, this comment touches on an important point, further suggested by the above demonstration. Future work can combine the covariate-outcome associations with the ASMD in order to select the best candidate for splitting. For example, multiplying the ASMD of covariate _j_ with the absolute regression coefficient of (a standardized) covariate _j_, and selecting the covariate maximizing this combined value. We considered this approach, but decided against it in order to obtain an outcome-agnostic method.
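To make the splitting rule under discussion concrete, below is a minimal, hypothetical sketch of a single greedy split step: compute the ASMD of every covariate and partition the sample on the most imbalanced one. The median split point and the helper names (`greedy_asmd_split`) are our illustrative choices, not the paper's actual implementation.

```python
import numpy as np
import pandas as pd

def calculate_asmd(X, a):
    # Absolute standardized mean difference per covariate,
    # using the same pooled-variance denominator as the snippet above.
    X1, X0 = X.loc[a == 1], X.loc[a == 0]
    return ((X0.mean() - X1.mean()) / np.sqrt(X0.var() + X1.var())).abs()

def greedy_asmd_split(X, a):
    # Pick the most imbalanced covariate and split the sample at its median
    # (illustrative split point; an actual method may search over thresholds).
    feature = calculate_asmd(X, a).idxmax()
    threshold = X[feature].median()
    left = X[feature] <= threshold
    return feature, threshold, left

# Tiny demo: x_bigASMD is shifted between groups, so it is chosen for splitting.
rng = np.random.default_rng(0)
a = pd.Series(rng.binomial(1, 0.5, 500), name="a")
X = pd.DataFrame(rng.normal(size=(500, 2)), columns=["x_smallASMD", "x_bigASMD"])
X.loc[a == 1, "x_bigASMD"] += 2.0

feature, threshold, left = greedy_asmd_split(X, a)
print(feature)  # x_bigASMD
```

Recursing this step on each half of the sample, and stopping when a subgroup is balanced (or too small), yields a tree of the kind described in the rebuttal.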
Summary: The paper introduces a decision tree methodology to identify regions where selection bias no longer ensures covariate balance. These regions, which have some level of interpretability, can then be removed in subsequent analysis. Strengths: The paper presents an interesting decision tree methodology. The paper contains a significant amount of simulation experiments to validate the procedure. Weaknesses: Despite a focus on covariate balance, there is no guarantee of balance, unlike competing methods (rerandomization, matching, etc.). Furthermore, there is limited analysis to show the claims of balance are fulfilled, especially in high dimensions. The paper explores a bias-interpretability tradeoff, but provides no rigorous definition. The proposed model is often more biased than alternative models, most notably IPW, and it is not clear that the resulting decision trees, or their interpretations, are actually sensible. No discussion of estimator variance is given, or of how that might factor into a tradeoff, despite the high variance generally expected from decision tree estimators. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: In Section 2, it is briefly claimed that BICauseTree is better suited for ATE estimation instead of CATE, with no further discussion. This feels like a potentially important point and requires further justification. Trimming has the potential to bias treatment effect measurement. Are there any guarantees that trimming in your model ensures unbiased estimates? Are the decision trees in Figures A13 and A17 interpretable? While the decision tree can be followed, the trimmed regions seem to have limited clinical interpretation. More discussion around this feels necessary. In particular, interpretability does not seem to imply correctness for this method. ASMD is often minimized to establish balance.
While splitting on variables that exhibit high ASMD should generally lead to some level of balance, is there any guarantee that the resulting regions will be optimally balanced in any sense? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: The paper touches on some limitations, most notably the increased bias that is expected. The paper does not discuss the variance of the estimator in detail relative to other methods, which is another potential limitation. The method advocates for trimming nodes that violate positivity. These nodes could contain sensitive subpopulations and could lead to fairness concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time, helpful comments, and insights. - Guarantee of balance: As pointed out, our method cannot guarantee covariate balance. This is true for most practical methods; Matching, for instance, guarantees balance only if done by exact matching on the covariates, which is impractical in high dimensions and can result in a low remaining sample size. In practice, matching is more often done using some distance function (KNN, propensity score matching, etc.), where no guarantee of balance can be given. - Analysis of balance: We show some analysis of balance in appendix Figures A6, A12, A16, A21. Also, we argue that balancing is not the goal but a tool, while the real focus is unbiased estimation. - Balance in high dimensions: As mentioned, we do not claim that the tree will result in balanced subpopulations in all cases, but in most cases where some natural experiments exist in the data. This is explicitly shown for low-dimensional synthetic data, but implicitly assumed by the performance on other datasets with more covariates. To demonstrate this point with more evidence, we have added a supplementary figure showing the performance of BICause Tree in high dimensions: as we increase the dimension, the tree maintains its performance both in terms of balancing and debiasing, when natural experiments exist (see plots in the plots PDF of the rebuttal). The reason this holds is that the tree effectively operates on a sub-space of the features that are unbalanced, and not on the whole feature space. - Definition of the bias-interpretability tradeoff: We discuss the trade-off between interpretability and bias in lines 297-299 and show it in Figure 4: the complexity of the tree compared to its bias. A similar trade-off exists for more complex models: they can achieve better performance in estimation bias but are less interpretable than our model.
Our model's interpretability is traded off against some estimation bias, due to its non-parametric nature. We agree that further discussion of this point would be useful, and we will add it to the paper. - Sensibility of BICause Tree interpretation: We argue that the ability to tell whether the estimation of a causal model is sensible is an inherent problem in causal inference, due to the lack of ground truth. Our method, however, allows experts to inspect whether a decision is sensible, because it is interpretable and transparent, whereas other methods that are not interpretable cannot be inspected as such. This is a key motivation for using our model instead of others: in the absence of ground truth, expert inspection of the algorithm's decisions is the next best option, and this is what we allow. - Discussion of estimator variance: We note that the estimator's variance is shown relative to other estimators (Figures 1, 2, 3) and discussed (lines 272, 276, 295-6). As we are using a tree-based model, we indeed inherit all of the advantages as well as the disadvantages of trees, e.g., the large sample size required, and higher variance. Specifically, the tree's variance is comparable with that of IPW, which is also known as a high-variance estimator [1]. Regarding high-dimensional data, our model should encounter fewer difficulties than other estimators, as the tree ultimately picks a small subset of features to separate the population, if natural experiments exist. We have added a supplementary figure showing this, namely that the performance of the model remains unaffected when increasing the dimension (see plots PDF of the rebuttal). - BICauseTree is better suited for ATE estimation than CATE: We discuss CATE in the text to emphasize the difference from Causal Trees, which are another custom-objective-function tree-based model that optimizes for effect heterogeneity, whilst ours optimizes for exchangeability (lines 119-128).
In fact, we made a mistake in the sentence phrasing and will clarify it (line 124): BICause Tree is not necessarily better suited for ATE than Causal Trees, but rather can only estimate the ATE. However, the ATE is often an estimand of interest, thus we believe our model is useful. - Effect of trimming on effect measurement: Trimming is a common method for handling positivity violations in causal inference. Instead of performing this step before applying the causal model (as one would do in IPW, for instance), it is performed as an inherent step of our model. We argue that this is imperative, since we cannot make data-based inferences on units without overlap. We stress that all comparisons to other models in the paper are done on the basis of the same trimmed population as in the BICause tree, to ensure a fair comparison (lines 207-210). - Interpretability of trees (Figures A13 and A17) and their correctness: Ideally, the model can give interpretable regions; this of course depends on the data. The trees in A13 and A17 are indeed interpretable, and this allows experts to criticize/evaluate the correctness of the method, which cannot be done with methods that are not interpretable by design. In general, interpretability does not imply correctness for any model. It does, however, allow criticizing the decisions made by the model, which is impossible when using other, non-interpretable models. - Guarantee that the resulting regions will be optimally balanced: Unfortunately, we could not establish an analytical guarantee (note that most practical methods do not have such guarantees). We do demonstrate that when natural experiments exist in the data, the model can find them, leading to good balancing. - Fairness concerns due to the trimmed nodes: As explained above, trimming is a mandatory step in causal inference, as it is impossible to make valid causal claims when there is no overlap. It is often performed using opaque models anyway.
Our advantage over other models is the capability to precisely define, in the covariate space, those populations we can or cannot infer on. [1] Khan et al. "Adaptive normalization for IPW estimation" Journal of Causal Inference (2023) --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply. Most of my concerns have been addressed to some degree, although I still have significant concerns: - I acknowledge that balance is a secondary concern to the primary goal of unbiasedness. That said, BICauseTree is explicitly referred to as an "interpretable balancing method". The lack of any type of guarantee, for balance or unbiasedness in general, along with unconvincing empirical results, remains a significant concern for me. - The lack of a rigorous definition of interpretability, or of the bias-interpretability tradeoff, makes any claims of interpretability difficult to evaluate. Our difference of opinion on the interpretability of Figures A13 and A17 further reinforces this. - Furthermore, I still do not fully understand how the improved interpretability of excluded observations prior to ATE estimation will significantly aid a practitioner. Is it even an ATE estimator at that point? Is your focus on situations where a CATE estimator is impractical? I believe further discussion of the practical implications of BICauseTree is necessary. Ultimately, after reviewing your response, and the feedback of my fellow reviewers, I will not be changing my initial score at this time. --- Reply to Comment 1.1.1: Comment: Thank you for your additional consideration and the points brought up. Please see our point-by-point response: * We understand the reviewer's concern about the lack of analytical guarantees and are only left with restating the fact that neither do other well-known modern methods [1 (a NeurIPS paper too), 2], which show their utility using simulations alone.
If the reviewer can suggest specific experiments they would like to see that would further convince them, then we will be happy to conduct them. * We operate within the page-limit constraints, and we acknowledge we might have been terse on some definitions we thought to be either well established, intuitive, or deferrable to external resources. For example, we describe interpretability based on Cynthia Rudin's landmark paper [3] and further refer to it. In her paper, she claims that _"Interpretability is a domain-specific notion so there cannot be an all-purpose definition"_ and that it is basically a useful constraint that _"obeys structural knowledge of the domain"_. We further believe that the fact that we can discuss whether the _content_ of the explanation makes sense is a big step forward, as this discussion is not even possible to begin with under any other black-box model. * The positivity assumption is one of the three assumptions required in order to make valid data-driven causal claims (in addition to causal consistency and exchangeability). However, in real-world data, not all units are always comparable to begin with, as they may have no counterparts in the other group to allow extrapolation of the outcome across groups. It is very common in practice to discard these units [4, 5]. However, this indeed changes the actual eligibility criteria and therefore to whom we believe the results will generalize in the population. This is the reason why it is of interest to know on whom exactly the causal claims are made. The overlapping region is where a researcher believes their results will transport from the sample to the population (i.e., for whom the results are externally valid) [6]. When we discuss transportability, the "ATE" is indeed not well-defined, and needs to be separated into the sample ATE (SATE) and the population ATE (PATE) [7, 8]. Lastly, the CATE is not a concern in this study. We again stress that the ATE is a valid estimand on its own.
For example, it has both logistical and philosophical justification in the field of public health policy. [1] Shi, Claudia, David Blei, and Victor Veitch. "Adapting neural networks for the estimation of treatment effects." Advances in neural information processing systems 32 (2019). [2] Hill, Jennifer L. "Bayesian nonparametric modeling for causal inference." Journal of Computational and Graphical Statistics 20.1 (2011): 217-240. [3] Rudin, Cynthia. "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead." Nature machine intelligence 1.5 (2019): 206-215. [4] Potter, Frank J. "The effect of weight trimming on nonlinear survey estimates." Proceedings of the American Statistical Association, Section on Survey Research Methods. Vol. 758763. Washington, DC: American Statistical Association, 1993. [5] Cole, Stephen R., and Miguel A. Hernán. "Constructing inverse probability weights for marginal structural models." American journal of epidemiology 168.6 (2008): 656-664. [6] Oberst, Michael, et al. "Characterization of overlap in observational studies." International Conference on Artificial Intelligence and Statistics. PMLR, 2020. [7] Degtiar, Irina, and Sherri Rose. "A review of generalizability and transportability." Annual Review of Statistics and Its Application 10 (2023): 501-524. [8] Imai, Kosuke, Gary King, and Elizabeth A. Stuart. "Misunderstandings between experimentalists and observationalists about causal inference." Journal of the Royal Statistical Society Series A: Statistics in Society 171.2 (2008): 481-502.
Rebuttal 1: Rebuttal: We thank the reviewers for their time, their helpful comments, and for sharing their expertise. We present in our work a model for effect estimation in observational data that is inherently interpretable, scalable, and has the useful consequence of abstaining from inference on subgroups where inference would be unreliable (i.e., with positivity violations). Here, we address the points raised by multiple reviewers and present the changes made following these comments: 1. Limitations and strengths of decision trees: Our method builds on top of decision trees for their inherent interpretability. However, as noted by the reviewers, by doing so we also inherit their disadvantages: namely, their requirement for a larger sample size to estimate densities non-parametrically (and therefore their higher variance relative to parametric models), and their unsuitability for certain functional forms (e.g., estimating straight lines inefficiently). Nevertheless, these limitations can be partially overcome by fitting estimators at leaf nodes (similar to how one may convert a piecewise constant function to a piecewise linear one). We note that along with the limitations we also inherit the strengths of trees: their interpretability, their ability to deal with high-dimensional data by looking at a relevant sub-space of the features, and their ability to account for complex interactions between features. 2. Several reviewers questioned why we focus on reduction of estimation bias rather than improvement in balancing. We note that although our method is guided by improvement in balancing, our ultimate goal (like that of any causal inference method) is the consequent reduction in estimation bias. Therefore, estimation bias is the focus metric of our results, as it provides stronger evidence for the usefulness of our method. We did, of course, evaluate balancing as well, but due to space limitations, this part is currently in the appendix. 3. 
Two of the reviewers commented on the correctness, or sensibility, of our model. The interpretability of our model was a key motivation for developing it, and we believe it is one of its key strengths. It is important to note, however, that interpretability does not imply sensibility or correctness. It does, however, give decision makers and domain experts the possibility to criticize the inferences made by the model and their correctness - something that is not available with black-box models, even with explainability methods. This is even more important in causal inference, where there is no inherent ground truth to validate the results of the model. Thus, in summary, our model allows subjecting the algorithm's decisions to such inspection, whereas other, non-interpretable models do not offer this important capability. 4. Some of the reviewers questioned why we focus on the largest ASMD. The idea behind our method is to gradually reduce bias by stratifying on the most imbalanced feature. The intuition is that the most imbalanced confounder causes the most confounding bias. ASMD is a common way to evaluate imbalance in applied causal inference, but our motivation above was not well phrased in the manuscript, leading to some lack of clarity regarding our choice to use it. We have made clarifications in the text to explain the motivation for using ASMD. Furthermore, to support our algorithmic choice, we implemented a variant of our method that chooses features to split on randomly, rather than taking the one with the highest ASMD, and provide figures showing that it underperforms both in terms of estimation bias and overall balance. This analysis will be added to the appendix (see the last item, regarding new experiments, for more details). 5. The rebuttal pdf includes two new results. 
One, described above, shows that splitting on the feature with the highest ASMD at each recursion step leads to better results, both in terms of estimation bias and in terms of balancing, and is therefore justified. The second result strengthens our claim of scalability. We claim our method is suited for high-dimensional data, but so far presented only somewhat implicit evidence. We now show that our method rediscovers existing natural experiments even when the dimensionality of the data is increased by two orders of magnitude. Pdf: /pdf/9d3b1a231181dc98c9394e99f8403d05d78ab053.pdf
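The ASMD-guided split selection discussed in the rebuttal (stratify on the most imbalanced feature) can be sketched in a few lines. This is an editorial illustration, not the authors' implementation: the function names and the pooled-standard-deviation variant of ASMD are assumptions.

```python
import numpy as np

def asmd(x_treated, x_control):
    """Absolute standardized mean difference for one covariate,
    using the pooled standard deviation of the two groups."""
    pooled_sd = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2)
    return abs(x_treated.mean() - x_control.mean()) / pooled_sd

def pick_split_feature(X, a):
    """Return the index of the covariate with the largest ASMD
    between treated (a == 1) and control (a == 0), plus all scores."""
    scores = [asmd(X[a == 1, j], X[a == 0, j]) for j in range(X.shape[1])]
    return int(np.argmax(scores)), scores
```

A tree-building loop would then stratify the sample on this feature and recurse, which matches the intuition stated above that the most imbalanced confounder contributes the most confounding bias.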
NeurIPS_2023_submissions_huggingface
2023
OTOv3: Towards Automatic Sub-Network Search Within General Super Deep Neural Networks
Reject
Summary: OTOv3 is an automated system that trains general super-networks without pretraining or fine-tuning, constructs search spaces automatically, and produces high-performing sub-networks. Experimental results show competitive or superior performance compared to state-of-the-art methods across different benchmark datasets and architectures. Strengths: Strengths are shown below: 1. OTOv3, together with its codebase, appears to be a systematic piece of work that may have real impact. 2. The figures are exquisite and appealing. Weaknesses: However, some weaknesses keep me from fully endorsing this article. 1. Although I have read the comparison in the appendix, I find that the work lacks novelty when compared to OTOv2. It appears that some key modules in OTOv3 are simply improved versions of those in OTOv2, for example GeZIGs versus ZIGs, the dependency graph construction versus that of OTOv2, and H2SPG versus DHSPG. As a result, OTOv3 does not seem to introduce any essential innovations but rather represents a slight technological improvement over OTOv2. 2. Some of the comparisons made in the experiments are unfair. In Table 2, the search cost is not fairly assessed, since the authors' value was obtained using a newer GPU (A100) while the others were obtained on older GPUs. Additionally, the authors fail to consider the training time of the super-network when comparing with training-free methods. In Table 3, some of the baselines used are outdated rather than the most recent ones; the authors mainly compare against papers published before 2020, with only one paper from 2022. A more appropriate reference would be OTOv2. 3. The method sections are poorly organized. Even after reading the Method section multiple times, I am still confused about how this method actually works. The authors often introduce concepts without providing sufficient explanation in the appropriate positions, resulting in explanations being scattered throughout the text, which is highly confusing. 
It seems as though the authors assume the readers are already familiar with these concepts. For instance, "$x^*_{H2SPG}$" is first mentioned in the introduction but not adequately explained. 4. The writing itself needs further improvement. For example, the sentence in line 39 ends abruptly, and it is unclear whether the subsequent sentences in line 40 should be read as a continuation. In the introduction, the authors use the phrase "perhaps the first" to describe their method. If the authors are confident that it is indeed the "first," they should remove the word "perhaps" or refrain from using "the first" altogether. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The authors could refer to the Weaknesses. Additionally, I also have some open questions to discuss. 1. It seems that the structures searched by the OTOv3 are complicated. So could the authors provide a comparison of inference time in Table 3? 2. Could the authors reveal the insights behind the methods with few words? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer bA4X, We appreciate your constructive comments and valued suggestions. Please see our responses as follows. We look forward to further discussion. - **Lacks novelty compared with OTOv2**. > Thank you for the question. Please refer to our general response above, along with a PDF illustrating the difference between OTOv3 and OTOv2. We hope that could adequately address your concern. - **Search cost comparison with zero-shot methods is unfair.** > Thank you for the insightful question. We provide one general response above, which we hope could adequately address your concern. - **Compare with more recent works. And why not compare with OTOv2?** > Thanks for the great suggestion. We have included more recent works in the preliminary comparison. Considering that OTOv3 and OTOv2 focus on divergent target problems, we have elected not to include OTOv2 in our comparative experiments. >| Method | Search Space | Acc1 (%) | Params | GPU | Search Cost (GPU days) | >|--|--|--|--|--|--| >| Zen-Score-2M (2021) | ResNet Pool | 97.5 |2M | A100 | 0.5 | >| SANAS-DARTS (2022) | DARTS | 97.5 | 3.2M | 1080Ti | 4.6 | >| **ZARTS (2022)** | DARTS | 97.5 | 3.5M | 2080Ti | 1.0 | >| **ADARTS (2022)** | DARTS | 97.5 | 2.9M | 3090Ti | 0.2| >|**EPC-DARTS (2023)**| DARTS | 97.6 | 3.2M | 3090Ti | 0.2 | >| OTOv3 | SuperResNet | 97.5 | 2M | A100 | 0.1 | > [ZARTS] ZARTS: On Zero-order Optimization for Neural Architecture Search, NeurIPS, 2022. > > [ADARTS] Partial Connection Based on Channel Attention for Differentiable Neural Architecture Search, IEEE TII, 2022. > > [EPC-DARTS] EPC-DARTS: Efficient partial channel connection for differentiable architecture search, NN, 2023. - **The DARTS sub-network searched by OTOv3 seems more complicated than others. Could the authors provide a comparison of inference time?** > This is a great observation. Please see our responses below. 
> **Why does OTOv3's searched sub-DARTS look more complicated than others?** > The current OTOv3 can automatically construct a search space for general DNNs, but it requires the searchable structures to contain trainable variables. Consequently, operators without trainable variables, such as skip connections, are preserved. In contrast, other methods can utilize pre-specified architecture variables to search over those operators. Meanwhile, the automated introduction of such auxiliary architecture variables over an automatically constructed search space is an open problem, not yet achieved by OTOv3. Therefore, the resulting sub-network produced by OTOv3 on DARTS typically looks more complicated than those of other methods. > **Inference time comparison.** We provide an inference time comparison on an Intel(R) Core(TM) i5 CPU @ 2.60GHz. We agree that the DARTS sub-network searched by OTOv3, though it has competitive FLOPs, runs slower than others due to the preservation of untrainable structures. > For fairness, we modified the DARTS super-network by adding extra trainable coefficients onto the untrainable structures to enable searching over them. The preliminary result showed that in this setting, the sub-network constructed by OTOv3 reaches inference time competitive with other methods. 
>| Method | Search over untrainable structures | Acc1 (%) | Params (M) | FLOPs (M) | Inference Time (CPU) | >|--|--|--|--|--|--| >| DARTS | Yes | 73.3 | 4.7 | 574 | 8.6ms | >| ISTA-NAS | Yes |76.0 | 5.7 | 638 | 9.1ms | > | PC-DARTS| Yes | 75.8 | 5.3 | 597 | 9.8ms | > | P-DARTS | Yes | 75.6 | 4.9 | 557 | 8.0ms | > | ProxylessNAS | Yes | 75.1 | 7.1 | 465 | 6.2ms | >| **OTOv3** | **No** | 75.3 | 4.8 | 547 | **24.2ms** | >| **OTOv3** | **Yes** | 75.5 | 5.2 | 563 | **8.4ms** | > > *(Tested in ONNX format to optimize the inference efficiency on CPU with an input tensor of shape (1, 3, 224, 224).)* > Finally, besides the above, given a search space automatically constructed by OTOv3, automatically introducing auxiliary variables or defining training-free mechanisms would, in our view, be the most appropriate solutions to enable automatic search over the structures without trainable variables. We will leave them as future work for further study. - **Improve the writing. Reveal the insights behind the methods in a few words.** > Thanks for the great suggestion. We will add more descriptive language and illustrations to the revision to improve the writing. > In a few words, given a general DNN, OTOv3 first automatically analyzes the dependencies and constructs its search space, then employs H2SPG to identify redundant structures and train the remaining important ones, and finally automatically constructs a high-performing sub-network upon the solution of H2SPG. > Below are our explanations in greater detail. > > **Automated search space construction.** Given a general DNN, OTOv3 first analyzes the dependencies across the various operators and creates a dependency graph. The dependency graph is then used to figure out which groups of operators are removable, so that after removing those operators, the remaining DNN is still valid and functions normally. In particular, OTOv3 considers a class of such removal structures, i.e., the generalized zero-invariant groups (GeZIGs). 
Therefore, the set of GeZIGs forms the search space of the given DNN. > > **H2SPG to identify redundant structures.** The next step is to determine which GeZIGs are redundant. The underlying optimization problem is a hierarchical structured-sparsity optimization problem. We propose H2SPG to effectively find redundant GeZIGs and train the important GeZIGs to high performance. Remark here that one redundant GeZIG corresponds to one redundant removal structure in the given DNN. > > **Automated sub-network construction.** In the end, based on the solution of H2SPG, we automatically construct a high-performing sub-network by removing the structures corresponding to the redundant GeZIGs. The sub-network created this way does not require further fine-tuning. Sincerely, Paper 5388 Authors --- Rebuttal Comment 1.1: Comment: After carefully reading the explanation of the authors, I still hold the view that there is no essential innovation over OTOv2. Thanks for the authors' work. --- Reply to Comment 1.1.1: Title: Thanks for your responses. Look forward to further discussion. Comment: Dear Reviewer bA4X, Thanks for your responses and efforts. We gently remark here that the counterparts of OTOv3 should be the existing NAS works rather than OTOv2, due to their orthogonal target problems. Compared with existing NAS works, OTOv3 has made significant essential innovations, especially given that the primary goal of OTOv3 has not been achieved by existing solutions to the best of our knowledge. We would like to further kindly highlight that OTOv3 is neither a simple technical improvement upon OTOv2 nor a direct application of OTOv2 to NAS tasks, since there exist systematic differences between the two frameworks, spanning target problems, algorithmic designs, and engineering developments. As outlined below, the technical contributions in OTOv3 are, in our understanding, of significant importance. 
| | OTOv3 | OTOv2| |--|--|--| | **Dependency Graph** | Designed for Automated Sub-network Search | Designed for Automated Structured Pruning | | **Sparse Optimizer** | Considers DNN hierarchy | No hierarchy | | **Produced Network** | Dramatically changes topology of computational graph | Preserves topology yet slims operators | | **Engineering Development** | The code bases differ by thousands of lines. | -- | We would appreciate it if the reviewer could kindly reconsider the novelty assessment of our work. Meanwhile, we look forward to discussing any other questions. Sincerely, Paper 5388 Authors
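The notion of a "removal structure" discussed in this rebuttal (a group of operators that can be deleted while the remaining network still functions) can be illustrated with a deliberately tiny toy sketch. This is an editorial illustration on a hand-built operator DAG, not the authors' dependency graph algorithm; the graph, node names, and validity criterion (a surviving input-to-output path) are assumptions.

```python
from collections import deque

# Toy DAG of operators: two parallel branches (conv1, conv2) merge at an add.
GRAPH = {
    "input": ["conv1", "conv2"],
    "conv1": ["add"],
    "conv2": ["add"],
    "add": ["output"],
    "output": [],
}

def still_valid(graph, removed):
    """True if a path input -> output survives after deleting `removed` nodes."""
    if "input" in removed or "output" in removed:
        return False
    seen, queue = {"input"}, deque(["input"])
    while queue:
        node = queue.popleft()
        if node == "output":
            return True
        for nxt in graph[node]:
            if nxt not in removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

Under this criterion, `{"conv1"}` is a removable group (the conv2 branch keeps the network connected), while `{"conv1", "conv2"}` is not, since removing both parallel branches disconnects the output. A real system would enumerate candidate groups from the dependency graph and keep only those passing such a validity check.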
Summary: This paper proposes OTOv3, an approach to train general supernets and discover promising subnetworks. It claims to be able to automatically generate the search space and to construct subnetworks based on a hierarchical half-space projected gradient. The proposed approach has been evaluated on a number of datasets, showing comparable performance to the state of the art. Strengths: + The proposed approach has been evaluated on a number of popular datasets. + The objectives make sense, and the overall idea of automatically building a search space and then searching for strong-performing models is interesting. Weaknesses: - It is not clear how this 3rd version of OTO advances the previous versions. The delta in this paper seems quite minor. - Many places throughout the paper are either unclear or perhaps incorrect. For instance, the very first sentence in the abstract: `Existing neural architecture search (NAS) methods typically rely on pre-specified super deep neural networks (super-networks)...` I think in the NAS context this can be very misleading. Also, the dependency graph construction seems to be quite standard in graph analysis. - Some of the experimental results reported are not very clear. How could the proposed approach be 4-5x faster than training-free NAS, if compared fairly? Does the 0.1 GPU day search cost include everything you need to do to get the final architecture? - Missing comparison with existing work such as TE-NAS, ZiCo, etc. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 1 poor Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer h4H8, We appreciate your constructive comments and valued suggestions, and respond in detail below. We look forward to further discussion. - **The delta between OTOv3 and OTOv2 is minor**. > Thanks for the question. Please refer to our global rebuttal response, along with a PDF that illustrates the difference between OTOv3 and OTOv2. We hope that we have adequately addressed your concern. - **There are many places throughout the paper, are either not clear or perhaps not correct. For example, Existing neural architecture search (NAS) methods typically rely on pre-specified super deep neural networks (super-networks) with handcrafted search-space beforehand.** > Thanks for the valued suggestion. To the best of our understanding, in order to search for an optimal sub-network given a general DNN, existing Neural Architecture Search (NAS) methods typically necessitate the initial handcrafting of a search space. This typically involves significant human intervention. To overcome these challenges, we present a novel algorithmic framework accompanied by an end-to-end automated system called OTOv3. Designed for general DNNs, this system has three main capabilities: (i) automatic construction of the search space, (ii) identification of redundant removal structures while maintaining high-performance training, and (iii) automatic generation of sub-networks in a one-shot manner. >To better deliver the context, we will rephrase the first sentence as **"To search optimal sub-networks given general DNNs, existing neural architecture search methods typically rely on handcrafting the search spaces beforehand, which usually requires significant human intervention"**. Meanwhile, we are aware that there exist other NAS methods that construct or grow network candidates given a pre-specified search pool and policy. This setting is different from ours, which starts from a general super DNN and ends with an optimal sub-network. 
We will carefully revisit the manuscript and polish the writing to make the description clearer and more precise. - **How could the proposed approach be 4-5x faster than training-free NAS?** > Thanks for the insightful question. Please refer to our global rebuttal response above, which we hope adequately addresses your concern. Furthermore, the search cost of OTOv3 takes into account the time spent from the construction of the dependency graph to the identification of redundant structures, at which point the architecture of the sub-network has been determined. - **Missing comparison with existing work such as TE-NAS, ZiCo, etc.** > Thanks for pointing us to these works. We have provided a preliminary comparison with TE-NAS in the global rebuttal response and will include both in the revision. Sincerely, Paper 5388 Authors --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the response and additional results. I also checked the comments and discussion from the other reviewers and I still feel that the delta over OTOv2 is limited. Having said that, based on the authors' response I will increase my score to 4. --- Reply to Comment 1.1.1: Title: Thank you Comment: Dear Reviewer h4H8, We greatly appreciate your feedback and are pleased to have addressed most of your concerns. We are also grateful for the improved score. While we respect your perspective on the delta to OTOv2, we believe in the high practical and technical innovation of OTOv3. Meanwhile, we are optimistic about its potential benefits to the AI community. To ensure clarity, we will enhance our manuscript to more effectively highlight the differences between the two approaches, especially in the context of their orthogonal target problems, in our revision. We remain available to address any further questions or concerns that arise. Thank you for the valued efforts! Sincerely, Paper 5388 Authors
Summary: The authors propose a training method to efficiently and automatically find optimal subnetworks without the need to configure the search space or pre-specify a supernetwork. Strengths: - A clever way to build the search space without manual intervention using zero-invariant group partition - Avoiding the need to choose the supernetwork beforehand - Automatically finding the optimal subnetwork - Good results to show the validity of the method Weaknesses: - While OTOv3 is able to outperform prior art on smaller datasets like FashionMNIST, CIFAR, SVHN, it doesn't beat them on a larger dataset like ImageNet (it is still competitive). Examples: P-DARTS, AmoebaNet-C, PC-DARTS - similar FLOPs/params but better than OTOv3 - In practice, accuracy (or the relevant metric) is one of the improvement metrics to optimize against. Therefore, would it not be better to have a suite of subnetworks that trade off accuracy/FLOPs rather than being given just one automatically? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can this approach be extended to transformer-based architecture search? - How can this approach be modified to give a set of subnetworks which can cater to different user needs versus just one subnetwork? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - Limitations of the approach have not been adequately discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer csCP, We appreciate your insightful comments and constructive suggestions. Please see our responses to the comments. We look forward to further discussion. - **Can this approach be extended to transformer-based architecture search?** > That is a great question. Certainly, OTOv3 can be extended to transformer-based architecture search. In fact, we are actively applying OTO to Large Language Models (LLMs) and have recorded promising experimental progress. > In greater detail, the majority of operators in transformer-based architectures are naturally supported, e.g., the MLP layers. The multi-head attention layers require further consideration to be treated as an entirety, since they are composed by assembling multiple basic operators in the DNN trace graph. > Furthermore, the OTO framework is also flexible enough to be integrated with other cutting-edge techniques widely used to fine-tune large-scale transformers, such as [LoRA]. Transformers with LoRA affect the structure of the dependency graph by introducing *overlapping* node groups, which are currently disjoint in the dependency graphs of the current libraries (see the attached rebuttal PDF). We will detail these applications in our forthcoming work. > [LoRA] LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2021. - **How can this approach be modified to give a set of subnetworks which can cater to different user needs versus just one subnetwork?** > That is a great question. The framework is flexible enough to produce multiple sub-networks under varying criteria to meet user needs. In particular, after automatically constructing the search space given a general DNN, we could modify the redundancy score calculation or integrate other proxies in accordance with the specific user demands. - **Some prior works achieve higher accuracy on ImageNet than OTOv3.** > Thanks for the great question. 
Our perspective is that their outperformance is mainly driven by the use of multi-level optimization, which trains additional pre-specified auxiliary architecture variables to avoid overfitting the training dataset. For the sake of autonomy and generality, the current OTOv3, though it automatically creates the search space given general DNNs, does not yet automatically introduce architecture variables (which is another open problem). We therefore formulated a single-level hierarchical structured-sparsity problem over the automatically constructed search space. > That being said, the OTO framework is flexible enough to be integrated with multi-level optimization once equipped with automatic introduction of architecture variables. Such an enhancement could potentially lead to further accuracy improvements and is left as future work on top of the current library. - **Limitations have not been adequately discussed.** > Thanks for the great question. Besides the limitations described in Appendix A.1, we will discuss them more adequately and provide potential solutions in the revision. The main limitations are largely from an engineering perspective, including unsupported operators, the requirement for removal structures to contain trainable variables, and potential overfitting to the training dataset. The latter two issues could be largely resolved if we realize automated introduction of architecture variables to enable multi-level optimization in the future. Sincerely, Paper 5388 Authors. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you authors for the response to my questions. Based on this, I still stand by my original appraisal. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for standing by the original appraisal.
Summary: The paper "OTOv3: Towards Automatic Sub-Network Search Within General Super Deep Neural Networks" presents a new automated system called Only-Train-Once (OTOv3) for Neural Architecture Search (NAS). Unlike existing NAS methods that often depend on pre-specified super deep neural networks with handcrafted search spaces, OTOv3 can train general super-networks and generate high-performing sub-networks in a one-shot manner without pre-training and fine-tuning. The authors outline three main contributions of OTOv3: automatic search space construction for general super-networks; a Hierarchical Half-Space Projected Gradient (H2SPG) for ensuring network validity during optimization; and automatic sub-network construction based on the super-network and the H2SPG solution. The effectiveness of OTOv3 is demonstrated on a variety of super-networks and benchmark datasets, with the computed sub-networks achieving competitive or superior performance. Strengths: 1. The proposed method is backed by substantial theoretical considerations and empirical validation, demonstrating the quality of the research. 2. The paper is well-structured and clearly written, making the proposed method and its benefits understandable. 3. OTOv3 can be applied to a wide range of super-networks and has shown competitive or superior performance on several benchmark datasets, indicating its potential impact in the field of NAS. Weaknesses: 1. The term search space and supernet are not well defined in the paper. To my understanding, supernet is a kind of representation of search space. In the paper, the author says that "OTOv3 automatically generates a search space given a general super-network", which is confusing. 2. The authors have not discussed the potential limitations of OTOv3, such as its possible limitations in search space, or potential for overfitting. These factors could impact its practical applicability. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer XuwW, We appreciate your valued comments and favorable recommendation for our work. Please see our responses to the constructive suggestions. We look forward to further discussion. - **The terms search space and supernet are not well defined in the paper. To my understanding, supernet is a kind of representation of search space.** > Thanks for the great suggestion. We will define the terminology better in the revision and unify *super-network* as *general DNN* to avoid ambiguity. - **The authors have not discussed the potential limitations of OTOv3, such as its possible limitations in search space, or potential for overfitting. These factors could impact its practical applicability.** > Thanks for the great suggestion. You are right that OTOv3 has limitations in terms of search space and potential overfitting. As detailed in Appendix A.1, the primary constraints of the existing library stem largely from an engineering perspective. Issues such as unsupported operators, the requirement for removal structures to contain trainable variables, and the lack of automated architecture-variable introduction may arise. The last of these further forced us to formulate a single-level hierarchical structured-sparsity optimization problem, which might overfit the training dataset. > Despite these limitations, we are optimistic and believe that the library will become more and more mature by harnessing contributions from both our end and the wider open-source community, building upon its current state. Sincerely, Paper 5388 Authors
Rebuttal 1: Rebuttal: Dear reviewers and ACs, We deeply appreciate all the insightful comments and constructive suggestions that helped us improve our manuscript. We have carefully addressed each comment and will include them in the revision. Below, we present our responses to the general questions regarding the difference and novelty of OTOv3 compared with OTOv2, and the search cost against training-free NAS. - **Target problems of OTOv3 and OTOv2 are orthogonal.** > We would like to kindly emphasize that OTOv3 and OTOv2 address two **distinct and orthogonal** challenges within the field of AutoML. We provide the example networks outlined in the attached rebuttal PDF along with further elucidation on this matter below. > **OTOv2: automated structured pruning for general DNNs.** Given a general DNN, OTOv2 studied how to automatically construct a slimmer pruned network. The process of structural pruning eliminates redundancy within each operator while still **maintaining the presence of these operators and the connections between them**. > **OTOv3: automated sub-network search for general DNNs.** Given a *general* DNN, OTOv3 further studies how to *automatically* identify and remove redundant operators entirely to construct a high-performing sub-network. Remark here that in OTOv3, **the operators and connections can be completely removed**. This is a stark contrast to the structured pruning in OTOv2, where they are preserved. - **What is the novelty compared with OTOv2?** > The target problem of OTOv3 is orthogonal to that of OTOv2 and not yet achieved by existing works, to the best of our knowledge. We establish a fresh algorithmic framework and develop an end-to-end system from scratch with **three main novel contributions.** These contributions are fundamentally different from those of OTOv2, marking a significant shift in our approach. 
> **Automated search space construction via dependency graph analysis.** Initially, OTOv3 automatically constructs dependency graphs for general DNNs. These graphs are used to identify 'removal structures', i.e., structures that can be eliminated without disrupting the normal function of the remaining DNNs. Remark here that OTOv3 and OTOv2 establish distinctly different dependency graph algorithms to cater to their orthogonal objectives. The design, creation, and resulting graphs differ significantly. Please see the rebuttal PDF for more details. Afterwards, OTOv3 constructs a search space based on these removal structures within the dependency graph. > **H2SPG versus DHSPG.** Subsequently, OTOv3 introduces a novel hierarchical half-space projected gradient method (H2SPG) to identify redundant removal structures within the search space. In contrast to the DHSPG in OTOv2, H2SPG further incorporates the DNN hierarchy when generating sparsity, ensuring that the remaining DNN remains valid. Indeed, the ablation study in Appendix D shows that DHSPG can easily result in invalid sub-networks. > **Automated subnetwork construction.** Finally, OTOv3 automatically constructs sub-networks by removing redundant structures identified by H2SPG. From the engineering development perspective, OTOv3 presents a greater challenge than OTOv2 because it more aggressively modifies the computational graphs of the DNNs. > These novel components work together to achieve our goal: given a general DNN, automatically train it from scratch and produce a high-performing sub-network in a one-shot manner. - **Search cost against training-free NAS** > **Difference of search space.** We appreciate the reviewers for the insightful question. We would like to clarify that OTOv3 and [ZenNAS] operate over similar yet different ResNet search spaces, leading to variations in search costs. 
[ZenNAS] populates massive numbers of ResNet candidates from a search pool and ranks them by varying zero-shot proxies such as Zen-Score, Synflow, and NASWOT. This process took 0.5 GPU days on a V100 GPU for the 1M-parameter model (about 0.4 GPU days on an A100). In contrast, to set up the starting DNN for OTOv3, we independently constructed SuperResNets, as depicted in Figure 7 of the Appendix. SuperResNets include the optimal architectures derived from [ZenNAS] and aim to discover the most suitable sub-networks using H2SPG over the automated search space. > **Comparison with other zero-shot methods.** We appreciate the reviewers for pointing us to more literature on training-free NAS and will include it in the revision. For a preliminary comparison, please see the table below. [TE-NAS] searches more efficiently, even on a 1080Ti GPU. Please note that we have not included [ZiCo], as it did not report CIFAR10 results on DARTS, but we will compare against it in other experiments. > **Flexibility with zero-shot proxies.** Finally, we would like to highlight that OTOv3 focuses on autonomy and generality rather than search cost. In fact, **OTOv3 can be flexibly integrated with zero-shot proxies for efficient search.** After establishing the search space of general DNNs, we could use training-free schemes to efficiently obtain sub-networks. 
This is feasible because **the redundant structure identification mechanism in OTOv3 is modular and is not restricted to gradient-based methods.** >| Method | Type | Search Space | Acc1 (%) | Params | GPU | Search Cost (GPU days) | >|--|--|--|--|--|--|--| >| Zen-Score-1M | Zero-Shot | ResNet Pool | 96.2 | 1M | A100 | 0.4 | >| Zen-Score-2M | Zero-Shot | ResNet Pool | 97.5 |2M | A100 | 0.5 | >| **TE-NAS** | Zero-Shot | DARTS | 97.4 | 3.8M | 1080Ti | **0.05**| >| ISTA-NAS | Grad | DARTS | 97.5 | 3.3M | A100 | 0.1 | >| PrDARTS | Grad | DARTS | 97.4 | 3.9M | A100 | 0.2 | >| OTOv3-2M| Grad | SuperResNet | 97.5 | 2M | A100 | 0.1 | > [ZenNAS] Zen-NAS: A Zero-Shot NAS for High-Performance Deep Image Recognition, ICCV 2021. > > [TE-NAS] Neural Architecture Search on ImageNet in Four GPU Hours, ICLR 2021. > > [ZiCo] ZiCo: Zero-shot NAS via Inverse Coefficient of Variation on Gradients, ICLR 2023. Sincerely, Paper 5388 Authors Pdf: /pdf/d067c021444775de73fde40138b85193cc0562fe.pdf
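The half-space projected gradient idea behind DHSPG/H2SPG discussed above can be illustrated with a minimal sketch. This is our own simplified reading of the group-level half-space test, not the authors' implementation; the group layout, the projection condition, and the `epsilon` threshold are assumptions for illustration only.

```python
import numpy as np

def half_space_project(x_groups, x_trial_groups, epsilon=0.01):
    """Group-wise half-space projection (illustrative sketch).

    For each variable group, the trial (post-gradient-step) point is kept
    only if it remains in the half-space {y : <y, x> >= epsilon * ||x||^2}
    defined by the current iterate x; otherwise the whole group is projected
    to zero, yielding group-level structured sparsity.
    """
    projected = []
    for x, x_trial in zip(x_groups, x_trial_groups):
        if np.dot(x_trial, x) < epsilon * np.dot(x, x):
            projected.append(np.zeros_like(x))  # group deemed redundant
        else:
            projected.append(x_trial)           # group kept as-is
    return projected
```

In this toy setup, a group whose trial point still points in roughly the same direction as the current iterate survives, while a group that has drifted against it is zeroed out entirely.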
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Test-time Training for Matching-based Video Object Segmentation
Accept (poster)
Summary: This paper investigates several test-time training methods for improving matching-based video object segmentation algorithms. It proposes and compares three types of test-time training methods and finds that the mask cycle loss works the best. The mask cycle loss forward/backward propagates a mask and requires cycle consistency. The authors find that test-time training significantly improves performance, especially in sim-to-real transfer and in videos with corruption. Strengths: - Adapting to harder cases during inference has been a challenge for video object segmentation algorithms. Test-time adaptation (i.e., online learning) can be useful, but most current online learning methods are limited to fine-tuning at the frame level without considering the temporal aspect. This means those methods are of limited aid when the video contains fast and complex motion of the target object. In contrast, the proposed cycle loss takes temporal information into account, which is helpful. - There are good analyses about how different models behave with different levels of data corruption. - I like the inclusion of MOSE results in the supplementary material. It is a challenging real-world dataset that has much harder examples than the training set, and current methods have difficulty generalizing to it. This shows that the proposed method works for real-world applications and the result is encouraging. I would recommend putting this in the main paper. Weaknesses: - There is no discussion on the time required for the fine-tuning stage and how it affects the overall running time during inference. - The main results are presented with DAVIS-C, which is synthetic. The results would carry more weight if done with the MOSE dataset instead. - It would be more complete if the authors included a discussion on the use of data augmentation during the finetuning stage. Related: Lucid Data Dreaming for Video Object Segmentation, IJCV 2019. 
- Missing related paper: Delving into the Cyclic Mechanism in Semi-supervised Video Object Segmentation, NeurIPS 2020 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Table 3 in the supplementary material: the two bars with 38.9 do not have the same height. Is this a typo? - Figure 3 – The mask predictions before test-time training look bad. Is that solely caused by the gaussian noise? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Test-time training is slow. Also, cycle loss can lead to degenerate output (e.g., identical mask at every time step) which might be why the authors only train for 100 steps and that some outputs become worse after finetuning (Figure 7). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
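The mask cycle loss summarized above (forward/backward propagate a mask and require cycle consistency with the given first-frame mask) can be illustrated with a toy sketch. This is a hypothetical NumPy version, not the paper's implementation: the affinity-based propagation, the temperature `tau`, and the binary cross-entropy scoring are assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def propagate(mask_src, feat_src, feat_tgt, tau=0.1):
    # Each target pixel attends to source pixels by feature similarity,
    # then pulls the soft mask value across (matching-based propagation).
    affinity = softmax(feat_tgt @ feat_src.T / tau, axis=1)  # (Nt, Ns)
    return affinity @ mask_src                               # (Nt,)

def mask_cycle_loss(mask0, feats, tau=0.1, eps=1e-8):
    # Forward hops through the frame sequence, then one backward hop to
    # frame 0; the cycle-reconstructed mask is scored against the given
    # ground-truth first-frame mask with binary cross-entropy.
    m = mask0
    for t in range(1, len(feats)):
        m = propagate(m, feats[t - 1], feats[t], tau)
    m_back = propagate(m, feats[-1], feats[0], tau)
    m_back = np.clip(m_back, eps, 1 - eps)
    return float(-np.mean(mask0 * np.log(m_back)
                          + (1 - mask0) * np.log(1 - m_back)))
```

When the per-frame features match well, the mask survives the forward/backward cycle and the loss is near zero; a model that propagates the mask inconsistently is penalized, which is what test-time optimization of this loss exploits.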
Rebuttal 1: Rebuttal: > __Timings for the fine-tuning stage__ We thank the reviewer for bringing attention to this. In the supplementary, we report that one iteration of tt-MCC takes ~814 ms for STCN (supp-line68). Timing estimates for a single video (on average and on DAVIS) are as follows: it takes roughly 3 seconds to run STCN per video, and 81, 10.5 and 8.5 seconds to run tt-MCC, tt-Ent, and tt-AE, respectively, for 100 iterations. Note however that: a) TTT is run per video, i.e. TTT overhead is amortized and becomes less for longer videos; b) decent improvements can be obtained with less than 100 TTT iterations, e.g. see Table A of the rebuttal PDF; c) the timings above are estimated with our basic, non-optimized TTT implementation. During the rebuttal, for example, we managed to obtain a speed-up of 20%, while further optimization is definitely possible. Although applying TTT consistently might seem infeasible due to time overheads, what our paper shows is that it is highly useful for cases of extreme test-time distribution shifts. One should therefore also consider the case where we are given a few out-of-distribution examples and want to quickly adapt our current model to improve its performance on them. In such cases, TTT is a simple and cost-effective solution, far preferable to re-training the base model (it takes around 12.5 hours on 2 A100 GPUs to train STCN). We thank the reviewer for their comment and we promise to clearly report and discuss the time needed for test time training in the paper. > __The main results are presented with DAVIS-C which is synthetic. The results would carry more weight if done with the MOSE dataset instead.__ We agree that MOSE is a great new dataset (intricate occlusions, densely populated real-world settings). It is however not clear to us whether the reviewer suggests (a) to test on MOSE with corruptions, (b) to test on MOSE for the sim-to-real case, or (c) to test on MOSE for the case of no test-time distribution shift. 
With respect to (a), we agree that this would be valuable, but also very costly for experimentation since MOSE is many orders of magnitude larger than DAVIS and DAVIS-C contains 45 times more videos than DAVIS (for each original video we produce 15 corruptions over 3 levels). We can however consider this as a possible future update. With respect to (b), this is something already included in the supplementary material Figure 3; TTT is very valuable there as well. We can include this in the main paper if space allows. The case of (c) has not been the major focus of our paper, i.e. we focus on test-time distribution shifts. We did however run these experiments during rebuttal and the results are included in Table A of the rebuttal PDF with figures we submitted. There, we explore the use of a smaller number of iterations for TTT, which appears to be a better option for this setup and works for the case of testing on corruptions too. Let us further note here that it is common in the TTT literature to not report improvements for the case of no test-time distribution shift [18,35,39]; not decreasing the performance under no shift is typically considered an achievement ([39] states “Tent reaches the least error for most corruption types without increasing the error on the original data“). > __Discussion on the use of data augmentation during the finetuning stage. Related: Lucid Data Dreaming for Video Object Segmentation, IJCV 2019.__ We thank the reviewer for a great comment and a related work we missed. Inspired by that work, we tested the impact of standard image augmentations (color+geometric) on our TTT method. Their use had little impact over all cases. In particular, we observed some insignificant performance increase in the sim-to-real case on DAVIS, and a similarly insignificant drop in performance for videos with corruptions from DAVIS-C. 
Note that the authors of that work claim that “Ideally training data should be as similar as possible to the test data, even subtle differences may affect quality”, which fully aligns with the proposed MCC loss that makes use of the actual future video frames instead of trying to hallucinate them. We will add this experiment and discussion in the paper. > __Missing related paper: Delving into the Cyclic Mechanism in Semi-supervised Video Object Segmentation, NeurIPS 2020__ Thank you. We will add the missing reference and discuss it. In short, that paper uses CC at training time, which allows training while requiring masks only for the first video frame. This is similar to the way the cyclical loss is used in HODOR [1]. In contrast, we use CC for TTT, which allows us to perform TTT without any input beyond what the task itself requires. > __Table 3 in the supplementary material: the two bars with 38.9 do not have the same height. Is this a typo?__ Thank you for pointing this out. It is indeed a typo. The correct number for the MOSE train is _42.9_. > __Figure 3 – The mask predictions before test-time training look bad. Is that solely caused by the gaussian noise?__ This is correct. Gaussian noise in itself is quite a challenging corruption. --- Rebuttal Comment 1.1: Comment: Thank you for the response. Do the authors observe degenerate output caused by the cycle loss with longer training (as noted in the limitations)? --- Reply to Comment 1.1.1: Comment: To better study the effect of longer training at test-time, we ran our method for up to 500 iterations on DAVIS for the sim-to-real case, extending Figure 7 from the main paper. In summary, we did not observe any significant further decrease in VOS performance nor any degenerate outputs after longer training. We will add this discussion in the paper. More specifically, the number of DAVIS videos for which VOS performance decreases remained the same from 50 TTT iterations up to approximately 400, i.e. 
the J&F score decreases in 7 out of 30 videos. Only one extra video showed a decrease in J&F score by 500 iterations. Visually observing segmentation results, we notice the same issues that we mention in our responses for Reviewers hK29 and 3wfH above: some background pixels of similar appearance to the object are wrongly segmented. It is also worth noting that increasing the iterations only leads to a minor overall decrease in VOS performance, i.e. J&F score drops from 81.2 to 81.0/80.4 for 200/500 iterations, respectively.
Summary: This paper points out that current state-of-the-art approaches use a memory of already processed frames and estimate the segmentation masks of follow-up frames through matching. Lacking any adaptation mechanism, such methods are prone to test-time distribution shifts. To address this, this paper explores task-agnostic and VOS-specific test-time training strategies, including a mask cycle consistency-based variant tailored for matching-based VOS methods. The authors present a new dataset DAVIS-C and the proposed method is evaluated on the most common benchmarks, and demonstrate its effectiveness. Strengths: 1. Experimental results show that the proposed approach (MCC) significantly boosts the performance. 2. Paper is very easy to read and understand. Weaknesses: 1. Lack of technical contributions. Although mask cycle consistency usage at test time helps to boost the performance and enable more task-agnostic prediction, this seems to be a very similar technique, with only a minor modification (the mask), to those in other works. 2. Just a question though. Why is XMem not included in Fig 5? 3. Figures 6 and 7 are very hard to see and understand. I understand what these figures are trying to deliver, but to me, these figures definitely do not look so informative due to their seemingly unorganized graphs. 4. Ablations and limitations are missing. It would be much more informative to include failure cases as well. What would happen if the method extends beyond triplets and uses more images for MCC? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are missing Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > __Lack of technical contributions [...] this seems to be a very similar technique that with minor modification (mask) from other works__ Although Cycle Consistency (CC) has been used in the past, our method is the first one to have such a loss tailored for the task of one-shot VOS, i.e. for a setting where a single _groundtruth_ mask is also given. In this fashion, the CC loss in our case is using privileged information and is supervised, in contrast to many cases in prior work where it is self-supervised. This, together with the fact that our method is the first to apply this _at test-time_, we believe, makes our tt-MCC method a solid technical contribution. Let us further note that, besides the technical contribution of the tt-MCC method, our paper is the first one to study the effect of extreme test-time distribution shifts like sim-to-real and frame-level or video-level corruptions for the task of one-shot VOS in a principled way. Additionally, our paper is the first to study test-time training for VOS and includes benchmarking of a number of TTT baselines to offer valuable insights and a new benchmark for the community to build on and evaluate. Until now, TTT has mostly been studied for image classification. > __Why is XMem not included in Fig 5?__ We did not include it because it cluttered the presentation, while providing little additional information. The numbers are overall slightly higher, but the insights and trends remain. We will include this. > __Ablations and limitations are missing. It would be much more informative to include failure cases as well.__ We thank the reviewer for a valid comment. We believe that the rebuttal PDF and video significantly expand our exploration and analysis on limitations and failure cases. 
After extensive quantitative analysis (some cases also included in the rebuttal video), the key failure modes can be summarized as: a) With TTT, the model might get more/overly confident about a dubious prediction and propagate it wrongly (e.g. see monkey clip in the video). b) TTT might overfit the appearance module and wrongly add background pixels (e.g. see case of breakdancer, where a red shirt from the background confuses the model more than the STCN case). In Figure A of the rebuttal PDF, we provide some more in-depth analyses and a breakdown of the performance per video/object for the large MOSE-train dataset. In Figure A.a we evaluate performance gain with TTT vs the length of the video, as the reviewer suggested. We can observe that the video length has no significant impact on the performance. In Figure A.b, we present the performance gain vs the object size within the first frame and observe a slight correlation; for larger areas there is no negative gain. Inspired by the fact that the MOSE dataset contains examples where only a portion of the object is visible and annotated in the first frame due to occlusions, in Figure A.c we measure the impact of the size of the object in the first frame relative to the maximum size of the object within the video. A minor observation is that the gain is negatively impacted to a small extent when the object is less visible in the first frame compared to the subsequent frames. Unsurprisingly, the proposed tt-MCC overfits more to that partially annotated first frame, and usually segments the partial object in future frames instead of the full object. Note that extreme cases of severe occlusion in the first frame make such a VOS task ambiguous. Moreover, in Figure A.d of the rebuttal PDF we evaluate the impact of the object visibility within the video sequence, and confirm that tt-MCC is negatively impacted when the object is only briefly visible in the sequence. 
This could mean that by enforcing long-term temporal consistency, our CC loss imposes an inductive bias that the target object should be generally visible for a long period of time. Finally, Figure B of the rebuttal PDF explores temporal stability and presents a performance comparison over time on a per-video basis, with and without TTT, and also displays some failure cases. We will include all these additional analyses in the paper. > __What would happen if the method extends beyond triplets and uses more images for MCC?__ We generally followed exactly the same design that each base method uses during training and adopted the use of triplets for STCN and octuplets for XMem. We briefly explore performance using longer sequences for STCN in the supplementary material (lines 60-62): Using quadruplets or quintuplets instead of triplets reaches 81.7 and 82.4, respectively, but with a corresponding increase of 33% and 64% in training time, i.e., in summary, having longer sequences does bring some minor improvements, but at the cost of additional TTT time. We can discuss this in the main paper if reviewers think we should. Let us also note that, instead of increasing the number of images, one can achieve longer temporal consistency by modulating the value of the jump step $s$: In the supplementary, we validate its impact (supp-line 55) to observe that neither very small nor very large values are the optimal choices: Varying the jump steps from the default value of 10 to 1, 2, 25, and 50 achieves 80.6, 81.0, 80.8, and 79.0, respectively. See paragraph “Impact of triplet sampling” in Section C of the supplementary for further exploration of the sequence sampling.
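The triplet sampling with jump step $s$ discussed above can be sketched as follows. This is a hypothetical illustration, not the authors' sampler; the exact index ranges and the fixed frame-0 anchor are assumptions based on the description (frame 0 carries the ground-truth mask, and the other two frames are roughly $s$ frames apart).

```python
import random

def sample_triplet(num_frames, jump=10, rng=None):
    # Frame 0 carries the given ground-truth mask; the remaining two frames
    # are `jump` steps apart so the cycle spans a longer temporal window.
    rng = rng or random.Random(0)
    hi = max(1, num_frames - jump - 1)
    j = rng.randrange(1, hi + 1)
    return (0, j, min(j + jump, num_frames - 1))
```

Larger `jump` values stretch the cycle over more of the video, which (per the supplementary ablation) helps up to a point before performance degrades.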
Summary: This paper focuses on test-time training for matching-based VOS methods. Three methods are presented, including entropy loss, auto-encoder loss and cycle consistency loss. The cycle-consistency loss is tailored for VOS and utilizes the first-frame mask for supervision. Two evaluation protocols are provided, the sim-to-real and DAVIS-C. Results show that the proposed tt-MCC method can help matching-based VOS methods to gain better performance under sim-real shifts and corrupted videos. Strengths: - The proposed method appears to be straightforward to implement and potentially effective in conjunction with matching-based VOS methods. - The proposed DAVIS-C benchmark is a valuable contribution that can aid the community in advancing their research. Weaknesses: - **The proposed tt-MCC focuses solely on triplets and overlooks longer temporal consistency.** Methods that rely on short-term memory (STM) usually utilize multiple frames in their memory pools, allowing for aggregating long-term temporal information. However, tt-MCC only employs two frames for each training step, potentially limiting its exploitation of temporal consistency. - **The concept of cycle consistency has been extensively explored and widely employed in various video-related fields.** In light of this, the proposed method is not groundbreaking. Moreover, a recent study [a] shares a similar idea and asserts that their approach outperforms cycle consistency (as shown in Table 7). The authors may want to highlight the disparities and advantages of tt-MCC compared to the method proposed in [a]. - **The results for XMem-YD or STCN-YD + tt-MCC under no-corruption conditions on large-scale benchmarks like YouTube-VOS and MOSE are missing.** Additionally, synthetic datasets are not as commonly used as static images for pretraining. It would be beneficial to examine whether tt-MCC aids matching-based VOS methods after static pretraining. 
[a] Boosting Video Object Segmentation via Space-time Correspondence Learning, CVPR2023 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The method does not include a mechanism for implementing early stopping and determining the optimal iteration to stop training. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
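Of the three losses summarized above, the entropy variant (tt-Ent) is the simplest to sketch: Tent-style entropy minimization over the predicted per-pixel mask distribution. The NumPy version below is a toy illustration under our own assumptions (array shape conventions and the `eps` clamp), not the paper's implementation.

```python
import numpy as np

def entropy_loss(probs, eps=1e-8):
    """Mean Shannon entropy of per-pixel class probabilities, shape (..., K).

    Minimizing this at test time pushes the segmentation model toward
    confident (low-entropy) mask predictions, in the spirit of Tent.
    """
    p = np.clip(probs, eps, 1.0)
    return float(-(p * np.log(p)).sum(axis=-1).mean())
```

A uniform two-class prediction yields the maximum entropy log 2 per pixel, while a confident prediction drives the loss toward zero, so gradient descent on this loss sharpens the predicted masks.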
Rebuttal 1: Rebuttal: > __The proposed method focuses solely on triplets and overlooks longer temporal consistency.__ We generally followed exactly the same design that each base method uses during training and adopted the use of triplets for STCN and octuplets for XMem. We briefly explore performance using longer sequences for STCN in the supplementary material (lines 60-62): Using quadruplets or quintuplets instead of triplets reaches 81.7 and 82.4, respectively, but with a corresponding increase of 33% and 64% in training time, i.e., in summary, having longer sequences does bring some minor improvements, but with the cost of additional TTT time. We can discuss this in the main paper if reviewers think we should. Let us however also note that one can achieve longer temporal consistency even using triplets, by modulating the value of the jump steps: In the supplementary, we validate its impact (supp-line 55) to observe that neither very small nor very large values are the optimal choices: Varying the jump steps from the default value of 10 to 1, 2, 25, and 50 achieves 80.6, 81.0, 80.8, and 79.0, respectively. See paragraph “Impact of triplet sampling” in Section C of the supplementary for further exploration of the sequence sampling. > __Cycle consistency has been extensively explored - relation to [a] “Boosting Video Object Segmentation via Space-time Correspondence Learning” (CVPR 2023)__ We thank the reviewer for pointing out an interesting related work that is concurrent to ours. The key differences between [a] and our approach are: a) unlike our paper, [a] does not study the case of test-time adaptation of a model for VOS, and b) we use the cycle consistency on the _mask_ labels, not on the pixels or regions. [a] builds on other related papers like [16] where space-time consistency is imposed on the rgb pixels and further extends it to regions, but does not use the mask as a label. 
> __The results for XMem-YD or STCN-YD + tt-MCC under no-corruption conditions on large-scale benchmarks like YouTube-VOS and MOSE are missing__ We report the missing results in Table A of the rebuttal PDF, for different numbers of test-time training iterations. In the main paper, our core focus is on the case of test-time distribution shifts, and we only report results for 100 iterations, which we found to be best for all cases with distribution shifts. For the case where there is no distribution shift, however, we see from Table A that, unsurprisingly, using fewer iterations could be beneficial, especially for the larger-scale YTVOS and MOSE datasets. Overall, we see that TTT does not hurt the state-of-the-art methods in this case, but instead gives us a way of either preserving or improving SoTA performance in all cases of test-time videos, with or without distribution shift, something important in practice. We thank the reviewer and we will add these results. Note that a large number of existing papers for TTT in classification ([18][35][39]) report no or insignificant improvements for the case of no test-time distribution shifts. Actually, not decreasing the performance under no shift is typically considered an achievement ([39] states “Tent reaches the least error for most corruption types without increasing the error on the original data“). > __examine whether tt-MCC aids matching-based VOS methods after static pretraining__ We thank the reviewer for a valuable additional experiment. We explored this during rebuttal and report results in the right part of Table A in the rebuttal PDF, for STCN and XMem on 4 datasets. We conclude that tt-MCC provides significant improvements in this setup too. We will add these experiments. --- Rebuttal Comment 1.1: Comment: I sincerely thank the authors for providing a detailed response to my concerns. 
Given that the concept of Cycle Consistency has already been extensively studied in various tasks, including semi-supervised VOS, the originality of the approach in this paper is somewhat diminished. However, the experimental results presented in both the paper and the rebuttal adequately demonstrate the effectiveness of the proposed training strategy on STM-based methods [5,7]. Therefore, I am inclined to stick with the initial borderline accept score. Questions raised during rebuttal: - It appears that the author has only validated the effectiveness of their proposed method on STM-based VOS methods [5,7] without conducting experiments on other types of VOS methods, such as recent AOT-based methods [46,48]. Is it possible that the proposed method is only applicable to STM-based methods? --- Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: This is a good question. The main variant, tt-MCC, is well-suited for all matching-based methods that adhere to the function $\hat{m}_j = f(x_j, M)$, where $M$ represents the memory of previous frame predictions (line 120 of our paper). Upon revisiting the AOT method, we conclude that its characteristics align with the requirements of our tt-MCC formulation. However, we recognize that the integration of tt-MCC into the AOT implementation is a task that transcends the scope of the rebuttal period. We view this potential integration as an interesting task for future research or potential extensions of our own work. In contrast, tt-Ent, another proposed variant, possesses an even broader applicability beyond the realm of matching-based methods. Regarding tt-AE, as stated in our paper, it is tailored to the STCN architecture and does not possess a direct out-of-the-box applicability to AOT. However, we acknowledge the possibility of a modified variant that could be designed to align with the AOT architecture. 
While we have not yet conducted an exhaustive analysis of all the aspects involved, we believe that a carefully tailored adaptation is plausible. Moreover, please refer to our earlier response to reviewer iPsr which states that “our formulation is generic and applicable to a wide range of matching-based VOS methods, including ones that require only a single forward pass for multiple objects, e.g. AOT [48].“.
Summary: This paper revisits test-time training in video object segmentation (VOS) and introduces three losses (entropy loss, mask auto-encoder loss, and mask cycle consistency loss) that significantly enhance top-performing methods, particularly under extreme distribution shifts between training and testing sets. Additionally, it introduces DAVIS-C, a variant of the DAVIS test set with extreme distribution shifts, on which the proposed approach demonstrates strong performance. Strengths: 1. The proposed method enhances test-time training for matching-based video object segmentation (VOS) and achieves significant performance improvements for state-of-the-art methods in scenarios with extreme distribution shifts (e.g., synthetic to real data, stylization, and video corruptions). 2. The paper introduces the DAVIS-C dataset, specifically designed to evaluate model performance under extreme distribution shifts during test time. 3. The proposed method achieves a 70%-80% recovery in performance gain by training solely on synthetic data, without the need for video annotations during offline training. Weaknesses: Please see my comments in the Questions section below. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. It would be beneficial to clarify how the proposed model can be extended to handle multiple objects. Specifically, it is important to address whether multiple object segmentation can be achieved in a single pass during the inference step or if multiple forward passes are required. 2. I kindly request that the authors report the finetuning time alongside Figure 7. It would be more informative to have the exact time instead of just the number of training iterations. This information is valuable since one of the main drawbacks of test-time training methods is the potentially long finetuning time during test runs.
Additionally, please clarify if the reported average inference time with TTT per frame includes the finetuning time, as mentioned in lines 227-228. 3. Please kindly provide information regarding the memory requirements of the proposed method. Since it stores both mask and frame representations, it would be helpful to know if there is a threshold or mechanism to manage memory usage and if any data is dropped when reaching that threshold. 4. In lines 129-131, please provide clarification regarding the similarity matching metric utilized to identify similar items. 5. In lines 160-166, the last frame prediction is dependent on the second frame prediction; I am wondering if this type of "dependency" is necessary? For example, is it possible to predict both the last mask and the second mask based solely on the first mask? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > __Clarify how the proposed model can be extended to handle multiple objects__ The proposed test-time training (TTT) method is a way of performing test-time adaptation over a given base method (STCN/XMem in our paper). It does not change the way the base method does inference, and for STCN and XMem, for example, multiple objects are handled sequentially. Nevertheless, our formulation is generic and applicable to a wide range of matching-based VOS methods, including ones that require only a single forward pass for multiple objects, e.g. AOT [48]. In such a case, multiple objects would be handled via a single forward pass for TTT as well. > __Finetuning/TTT time__ We thank the reviewer for bringing attention to this. The average inference time reported in lines 227-228 of the main paper is _after_ TTT. We tried to clarify timings better in the supplementary, i.e. see lines 66-71; we apologize for any confusion. In the supplementary, we report that one iteration of tt-MCC takes ~814 ms for STCN (supp-line68). With this information, one can translate the x-axis of Figure 7 from iterations to seconds; we will update this in the final version. Timing estimates for a single video (on average, on DAVIS) are as follows: it takes roughly 3 seconds to run STCN per video, and 81, 10.5 and 8.5 seconds to run tt-MCC, tt-Ent, and tt-AE, respectively, for 100 iterations. Note however that: a) TTT is run per video, i.e. TTT overhead is amortized and becomes less for longer videos; b) decent improvements can be obtained with less TTT iterations, e.g. see Table A of the rebuttal PDF; c) the timings above are estimated with our basic, non-optimized TTT implementation. During the rebuttal, for example, we managed to obtain a speed-up of 20%, while further optimization is definitely possible. 
Although applying TTT consistently might seem infeasible due to time overheads, what our paper shows is that it is highly useful for cases of extreme test-time distribution shifts. One should therefore also consider the case where we are given a few out-of-distribution examples and want to quickly adapt our current model to improve its performance on them. In such cases, TTT is a simple and cost-effective solution, far preferable to re-training the base model (it takes around 12.5 hours on 2 A100 GPUs to train STCN). Overall, we fully agree with the reviewer that long fine-tuning times are a key drawback for test-time training methods in general, and one that the community barely touches: most TTT papers do not have any discussion on timings, although it is something important to report and discuss. We thank the reviewer for their comment, and we promise to clearly report and discuss the time needed for test-time training in the paper. >__Memory requirements__ During inference, STCN requires approx. 8 GB of GPU RAM. Test-time training on top of STCN with tt-AE, tt-Ent and tt-MCC requires 8, 16, and 23.5 GB, respectively, i.e. it can still fit in a modest GPU. Note that the calculations for TTT are computed when using a batch size of 4 (our default). Memory requirements can be further reduced by using a smaller batch size, if needed, without significant change in performance. We thank the reviewer for such comments on timings and memory; we will make sure to clearly report and discuss the added resources needed for test-time training. > __Clarification regarding the similarity matching metric__ The matching part is inherited from the base method. The technical details are exactly the same as in STCN and are described in the original paper. In particular, the full similarity matrix is formed between the test frame items and the memory items, but only the top-k values per row (rows correspond to the test frame) are maintained; the rest are set to 0.
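The top-k sparsification of the similarity matrix described in this answer can be sketched as follows. This is an illustrative reimplementation under assumptions (a plain dot-product similarity is used here for simplicity; STCN's actual similarity function and subsequent softmax differ), not the authors' code:

```python
import numpy as np

def topk_affinity(query, memory, k=20):
    """Keep only the top-k similarity values per query row; zero the rest.
    query: (N_q, D) items of the test frame; memory: (N_m, D) memory items.
    Sketch only -- the dot-product similarity is an assumption; the base
    method's exact similarity function may differ."""
    sim = query @ memory.T                          # full (N_q, N_m) similarity
    idx = np.argpartition(sim, -k, axis=1)[:, -k:]  # columns of the k largest per row
    sparse = np.zeros_like(sim)                     # all other entries set to 0
    np.put_along_axis(sparse, idx, np.take_along_axis(sim, idx, axis=1), axis=1)
    return sparse
```

Each test-frame item then attends only to its k most similar memory items, with all other affinities zeroed out, as described above.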
By the term item we refer to each D_e-dimensional vector, with each frame having WxH of them. > __Is it possible to predict both the last mask and the second mask based solely on the first mask?__ Yes, this is technically possible. We did not experiment with that, as we adopt the design choices of STCN. We expect its performance to not vary significantly. --- Rebuttal Comment 1.1: Comment: After reviewing the rebuttal and considering the comments from the other reviewers, I have increased my rating to Weak Accept. I value the author's response regarding test time and memory concerns. Kindly ensure that this aspect is reported and discussed clearly in the revised version, as promised. This is of great importance for a method of this nature. --- Reply to Comment 1.1.1: Title: Thank you for raising your rating Comment: We thank the reviewer for acknowledging our rebuttal and for increasing their rating. We will include a clear discussion on test time and memory consumption in the main paper as promised.
Rebuttal 1: Rebuttal: We would like to thank all five reviewers for insightful and constructive reviews. We are pleased that the feedback is __overall positive__. Four out of five reviewers highlighted the significant performance improvements of our proposed method under test-time distribution shifts. We are happy that they also found our paper “highly interesting” and with “significant practical implications” (__R-3wfH__), as well as our paper “easy to read and understand” (__R-hK29__). Reviewers further appreciated the introduction of the DAVIS-C benchmark (__R-iPsr__), calling it a “valuable contribution” for the community (__R-dPR9__). They highlighted our analyses over different levels of frame-level corruptions (__R-jwwD__) and the large improvements our method can achieve in such cases (__R-3wfH__). We are responding to each reviewer individually below, addressing all of their questions and comments. We will additionally send a video to the AC (referred to as __rebuttal video__ in the responses below) using the provided functionality, that summarizes the qualitative performance before and after our TTT for success, failure and other corner cases. We further attach to this response a one-page rebuttal PDF document with additional results and plots (referred to as __rebuttal PDF__ in the responses below) that help us respond to reviewer comments. It contains: * Table A with results for STCN and XMem on four datasets, for the cases of training from static images and real videos, as well as for a reduced number of iterations of test-time training. * Figure A with four different statistical analyses for the sim-to-real case on the large MOSE dataset, demonstrating the performance of the proposed approach versus video length, object size as well as other key elements that help us demonstrate better where our method helps and where it fails. 
* Figure B with plots that compare performance over time for a few videos, that justify the improved temporal stability of the proposed method and also demonstrate success and failure cases. We hope that our responses below, together with all the additional qualitative and quantitative results and analysis, address the reviewers’ comments in a satisfactory way. We are more than happy to engage in further discussion and clarify any points that might still remain unclear. Pdf: /pdf/1e2024f96be051a71c6914646aacfe243aa92092.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces an adaptation/test-time training algorithm for matching-based video object segmentation, specifically addressing the challenges posed by out-of-distribution videos (corruptions, stylization, and sim-to-real transfer). The proposed framework incorporates test-time training with various losses (entropy, mask consistency, etc.). The experimental evaluation includes benchmark datasets like DAVIS and YouTube-VOS, where the authors demonstrate performance improvements for out-of-distribution scenarios. Strengths: - The proposed direction is highly interesting and carries significant practical implications. - The results obtained are quite promising. Notably, there are substantial improvements observed across multiple datasets, particularly in the case of corrupted DAVIS. These outcomes highlight the effectiveness and value of test-time training in this context. Weaknesses: The technical aspects of the paper may appear somewhat limited. While concepts such as test-time training and the employed losses are well-known techniques, the actual technical contribution seems marginal. It would be beneficial to explicitly highlight the distinctions between this work and previous approaches. The validation process also seems somewhat limited, which undermines the full convincing power of the proposed method, in particular: - The absence of qualitative results in video formats makes it difficult to assess the practical impact and temporal stability of the improvements achieved. - The resemblance of DAVIS-C to real-world noise is uncertain, as the augmentation techniques seem quite artificial. Additionally, it may be difficult to reproduce the study on DAVIS-C, and the argumentation process is not fully clear. - Providing more in-depth analysis on intermediate/breakdown results would be valuable.
For instance, exploring the performance relative to video length, examining corner cases where the method fails, and identifying scenarios where the proposed approach is most beneficial. Currently it's hard to gain further insights from the results. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please discuss the questions raised above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: It has been discussed however not fully clear what the failure mode of the approach would be, might be helpful to conduct more analysis on this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > __Distinctions between this work and previous approaches__ Although Cycle Consistency (CC) has been used in the past, our method is the first one to have such a loss tailored for the task of one-shot VOS: Unlike methods like “Space-time correspondence as a contrastive random walk” [16] or the concurrent work “Boosting Video Object Segmentation via Space-time Correspondence Learning” from CVPR 23, we do not apply the cycle-consistency loss on RGB pixel values, but instead on the values of the mask in the first frame. This means that, unlike space-time correspondence CC losses, ours is not self-supervised. This, together with the fact that our method is the first to apply this loss _at test time_, makes our tt-MCC method a solid technical contribution, we believe. Unlike other methods like HODOR [1] that might employ a cycle consistency loss during training, our method tailors this to test-time training and uses the GT frame that is provided at test time. Concurrent work [49] also studies test time training (TTT) in the video domain but for a classification task; we instead fully tailor the TTT to the VOS task and use temporal _mask consistency_ as our loss. We hope that this answer covers the differences between the proposed and the closest related works; if the reviewer has another in mind, please let us know. Besides the technical contribution of the tt-MCC method, we want to further note here that our paper is the first one to study the effect of extreme test-time distribution shifts like sim-to-real and frame-level corruptions for the task of one-shot VOS in a principled way. Additionally, our paper is the first to study test-time training for VOS and includes benchmarking of a number of TTT baselines to offer valuable insights and a new benchmark for the community to build on and evaluate. Until now, TTT has mostly been studied on image classification.
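As a rough sketch of how such a test-time mask cycle-consistency objective could look (here `propagate` is a hypothetical stand-in for the base matching step that predicts a mask from a frame and a memory of past predictions; the paper's actual loss and memory handling may differ):

```python
import numpy as np

def mask_cycle_consistency_loss(propagate, frames, gt_mask0):
    """Hedged sketch in the spirit of tt-MCC: propagate the given
    first-frame mask forward through the video, then propagate the last
    prediction backward to frame 0 and score it against the only label
    available at test time, the ground-truth first-frame mask.
    `propagate(x, memory)` is a hypothetical stand-in, not the paper's API."""
    memory = [(frames[0], gt_mask0)]
    mask = gt_mask0
    for x in frames[1:]:                   # forward sweep through the clip
        mask = propagate(x, memory)
        memory.append((x, mask))
    back = [(frames[-1], mask)]            # backward sweep toward frame 0
    for x in reversed(frames[:-1]):
        mask = propagate(x, back)
        back.append((x, mask))
    p = np.clip(mask, 1e-6, 1 - 1e-6)      # binary cross-entropy vs. GT mask
    return float(-np.mean(gt_mask0 * np.log(p) + (1 - gt_mask0) * np.log(1 - p)))
```

Because supervision comes only from the provided first-frame mask, the loss stays label-driven (not self-supervised), which is the distinction drawn above from RGB-based space-time correspondence losses.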
> __Absence of qualitative results in video formats makes it difficult to assess the practical impact and temporal stability__ This is a great point. We are providing multiple qualitative results in the rebuttal video shared with the AC. It enables comparing results with and without TTT visually for a number of corner cases. From the rebuttal video and after extensive quantitative analysis, the key failure modes can be summarized as: a) With TTT, the model might get more/overly confident about a dubious prediction and propagate it wrongly (e.g. see the monkey clip in the video). b) TTT might overfit the appearance module and wrongly add background pixels (e.g. see the case of the breakdancer, where a red shirt from the background confuses the model more than in the STCN case). Regarding temporal stability, we include in Figure B of the rebuttal PDF a performance comparison over time on a per-video basis, with and without TTT. It demonstrates enduring performance gains. Notably, it also shows that the performance with TTT is more stable compared to that without TTT. > __The resemblance of DAVIS-C to real-world noise is uncertain__ Thank you for a valuable comment. We agree that these transformations might not perfectly represent the outcome of video recording in the real world. Nevertheless, we believe that several of the transformations do constitute common real-world video edits (contrast, brightness, crf compression, cartoonization, stylization). We follow in the footsteps of ImageNet-C, which has been a valuable benchmark to study image classification under distribution shift. We believe that this benchmark can lead to valuable insights, similar to what ImageNet-C achieved for image understanding.
> __it may be difficult to reproduce the study on DAVIS-C__ We are committed to releasing the DAVIS-C dataset, the code to regenerate the dataset, and the code for this study, which will make it possible to not only reproduce this study, but also make it easy to evaluate any future method on the same benchmark in a fair way. > __More in-depth analysis on intermediate/breakdown results would be valuable__ Great suggestion. In Figure A of the rebuttal PDF, we provide some more in-depth analyses and a breakdown of the performance per video/object for the large MOSE-train dataset. In Figure A.a we evaluate performance gain with TTT vs the length of the video, as the reviewer suggested. We can observe that the video length has no significant impact on the performance. In Figure A.b, we present the performance gain vs the object size within the first frame and observe a slight correlation; for larger area there is no negative gain. Inspired by the fact that the MOSE dataset contains examples where only a portion of the object is visible and annotated in the first frame due to occlusions, in Figure A.c we measure the impact of the size of the object in the first frame relative to the maximum size of the object within the video. A minor observation is that the gain is negatively impacted to a small extent when the object is less visible in the first frame compared to the subsequent frames. Unsurprisingly, the proposed tt-MCC overfits more to that partially annotated first frame, and usually segments the partial object in future frames instead of the full object. Note that extreme cases of severe occlusion in the first frame make such a VOS task ambiguous. Finally, we evaluate the impact of the duration of the object visibility within the video sequence in Figure A.d and confirm that tt-MCC is negatively impacted when the object is only briefly visible in the sequence. 
This could mean that by requesting long-term temporal consistency, our CC loss imposes an inductive bias that the target object should be generally visible for a long period of time. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the feedback. Especially, I appreciate the discussion contrasting previous approaches, and the additional analysis in Figure A. After reading the rebuttal as well as other reviews, I still have some concerns regarding the validation and effectiveness; therefore, I have adjusted my rating and am inclined to vote for rejection. --- Reply to Comment 1.1.1: Title: Thanks for the rating increase - happy to provide any further clarifications possible Comment: We thank the reviewer for the response and for the rating increase. We would further appreciate it if the reviewer can share details regarding the remaining concerns so that we can provide additional clarifications. The reviewer's initial concerns regarding validation and effectiveness, as well as a brief summary of our responses to each one of them, follow. * __"qualitative results and temporal stability"__: we provided an anonymized video with a large number of qualitative results for different success and corner cases, as well as Figure B of the rebuttal PDF showing that TTT is at least as stable as the base approach. * __"type of augmentations and reproducibility of DAVIS-C"__: we motivated the realistic aspect of the augmentations and guaranteed the reproducibility of the dataset and approach by committing to publicly share both implementations and the dataset itself. Additionally, please note that DAVIS-C is only one part of the evaluation study. The sim-to-real is the second part, which was now strengthened with the case of training on static images as requested by Reviewer dPR9, showing that our methods are _effective_ also in this case.
* __"more in-depth analysis"__: Figures A and B in the rebuttal PDF break down performance across different aspects (video length, object area, visibility duration) and report results across time, respectively. We are glad that the reviewer stated they appreciate it.
Robust Learning with Progressive Data Expansion Against Spurious Correlation
Accept (poster)
Summary: The paper describes Progressive Data Expansion (PDE), a new method for training models which are robust to spurious features. The idea of the method is to split the training into two phases: warmup and expansion. In the warmup stage, the dataset is balanced, and the model learns a classifier that ignores the spurious feature. In the expansion stage, the remaining data is used to finetune. The authors also provide a theoretical argument about feature learning, and show strong empirical performance on benchmark datasets. **W** &mdash; weakness, **S** &mdash; strength, **Q** &mdash; question. Strengths: **S1.** The authors provide a theoretical analysis, that provides some intuition for the method **S2.** The proposed method is simple and computationally cheap **S3.** The authors achieve strong empirical performance across three benchmark datasets Weaknesses: **Q1/W1. Relationship to DFR** The works [1, 2] advocated for an approach that is almost opposite to PDE: they train a model with standard ERM, and then retrain only the last layer on a group-balanced dataset. PDE on the other hand pretrains the model on a group-balanced dataset, and then finetunes with standard ERM. The methods achieve similar performance, if we consider DFR^Val, which is the main method in [1] (the authors argue that DFR^Val uses the validation data, but I believe PDE also uses the same amount of validation data to tune the hyper-parameters?). In fact, PDE achieves better performance on CelebA and CivilComments, while DFR achieves better performance on waterbirds. However, conceptually [1, 2] argue that in fact standard ERM learns the core features as well as GroupDRO on the standard benchmarks, and once the last layer is retrained we can recover the performance of GroupDRO. This intuition is quite the opposite of the intuition presented in this paper, which is that ERM does not learn the core features without intervention. 
I think it would be good if the authors could comment on this distinction. My intuition is that the pretraining in the warmup stage prevents the model from learning the spurious feature, and makes it harder for the model to use the spurious features in the expansion stage. Alternatively, it could be that the model still represents the spurious features, but because of the momentum, it is unable to switch to using this feature in the expansion phase. It would be interesting to explore more deeply which of these options is happening. Lastly, in line 287 you mention that the method is more efficient than DFR, because it does not train the model twice. However, DFR only retrains the last layer of a pretrained ERM model, which is computationally cheap, so I don't believe this statement is correct. **Q2/W2. Robustness to the choice of hyper-parameters** It is important to understand how sensitive the method is to the choice of hyper-parameters. In particular, I would be interested in the impact of - Length of the expansion training phase - Learning rate in the expansion phase My intuition is that the method continues going in a good direction for some time during the expansion phase, but then eventually it should converge to an optimal solution on the full data distribution, i.e. maximize the average accuracy and not the worst group accuracy. However, due to early stopping, the authors are able to achieve good performance, before the optimization learns to use the spurious feature. Is this intuition correct? **W3. Theoretical analysis** The theoretical analysis is quite toy. The authors use a model which is basically a bag of features, i.e. the result is a sum of cubed dot-products of features and "filters". This model is not very non-linear. However, it is understandable, given that theoretical analysis of non-linear models is challenging. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Q3. 
Expansion phase:** In line 225 you mention that during the expansion phase you try to sample data uniformly across groups. What do you do exactly in this phase? Isn't all of the minority data used in the warmup phase? **Q4. Intuition for expansion phase:** Related to Q3, why does performance on the training data distribution go down in Fig. 4(b), even when you reset the momentum variables? Is the training loss going up then? Why would this happen? Or is the data not just sampled from the training distribution early in the expansion phase? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: **References** [1] [_Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations_](https://openreview.net/forum?id=Zb6c8A-Fghk); P. Kirichenko, P. Izmailov, A. G. Wilson; ICLR 2023 [2] [_On Feature Learning in the Presence of Spurious Correlations_](https://proceedings.neurips.cc/paper_files/paper/2022/hash/fb64a552feda3d981dbe43527a80a07e-Abstract-Conference.html); P. Izmailov, P. Kirichenko, N. Gruver, A. G. Wilson; NeurIPS 2022 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
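For concreteness, the two-layer model the review paraphrases in W3 (a prediction formed as a sum of cubed dot-products between patch features and "filters") can be written, with notation that is our guess rather than the paper's, roughly as:

```latex
% Two-layer CNN with cubic activation \sigma(z) = z^3 (notation assumed):
% x = (x_1, \dots, x_P) are input patches, w_{c,j} are the J filters of class c.
f_c(\mathbf{x}) \;=\; \sum_{j=1}^{J} \sum_{p=1}^{P}
  \big\langle \mathbf{w}_{c,j},\, \mathbf{x}_p \big\rangle^{3},
\qquad
\hat{y} \;=\; \arg\max_{c}\, f_c(\mathbf{x}).
```

This makes the reviewer's point visible: the output is additive over patches and filters, so the model behaves like a bag of features, with the cube as its only nonlinearity.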
Rebuttal 1: Rebuttal: We appreciate your positive feedback and suggestions that helped us improve our work. We hope our clarifications answer your questions. --- **Q1a**: Consider DFR^Val. **A1a**: Thanks for pointing this out. We agree that all methods use the information from validation data for hyper-parameter tuning and model selection, including PDE and DFR^Tr. Yet, there is still a difference for DFR^Val: it also **trains the last layer on validation data**, which is updating the model parameters in addition to hyper-parameters. For a comprehensive comparison, we've updated our table (as in Table 2 of the attached PDF) to include DFR^Val, with a notation to distinguish whether the validation data is used solely for hyperparameter tuning or also for updating the final layer of the model itself. **Q1b**: The intuition for DFR is quite the opposite of the intuition in this paper. **A1b**: We believe that PDE and DFR approach the problem differently but don't contradict each other in their understandings. Our theory indicates that, for ERM, the learning of spurious features will quickly overshadow that of core features, resulting in the reliance on spurious features when making predictions. As in Theorem 2.2, the learning of the spurious feature will have surpassed a threshold while the learning of the core feature remains at the same scale as at initialization. That said, we don't intend to assert that ERM won't learn the core feature at all. Despite the dominance of the spurious feature in prediction, the model will still learn the core feature, at a minimal rate compared to the spurious one. We believe that there is no contradiction between us and the intuition of DFR to recover and amplify this learning of the core feature through later interventions. To reflect this, we will revise the wording of Theorem 2.2 to make it more rigorous. However, in PDE, the case is very different, as the spurious feature in a balanced dataset is not useful and its gradient remains 0.
The model only tends to learn the spurious feature in the expansion stage without any further intervention, as the expansion data is not group-balanced. Hence, we need momentum from warmup: we use the core feature preserved in momentum to amplify the core feature gradient on the expansion data so that it continues to dominate the spurious feature. Our arguments are also supported by synthetic experiments. In Figure 3a of our paper, there is an observable minimal learning of the core feature for ERM. Meanwhile, in Figure 3c, the model does not learn the spurious feature during warmup, where the groups are balanced. **Q1c**: Efficiency comparison with DFR. **A1c**: We acknowledge and appreciate the efficiency of the fine-tuning approaches. We'll clarify this in our manuscript to reflect that DFR retrains the last layer only. However, while it's true that re-training the final layer doesn't demand significant additional computation, it still requires first training a model using ERM. Meanwhile, PDE itself converges much faster to its optimal performance as compared to the ERM stage of DFR. **We still firmly believe that our statement on the efficiency of PDE is correct.** --- **Q2**: Robustness to the choice of hyper-parameters. **A2**: We appreciate the suggestion and added ablation studies in Table 4 of the PDF. Namely, PDE is robust within a reasonable range of hyperparameter selections. Meanwhile, there are preferred choices for the hyperparameters. - The amount of data added during each expansion cannot be too large, as it will harm the performance. As similarly demonstrated in Figure 6 of Appendix A, gradual expansion is important to our method, and adding all data at once will result in a worse performance. - There is a necessity to decay the learning rate after the warmup stage, while PDE is relatively robust to the extent of decay. - The number of times for data expansion depends on the early stopping on the validation data.
As in Figure 1 in the PDF, a smaller learning rate will result in more data expansions. However, a smaller learning rate does not necessarily result in a better performance. We will add the ablation studies and corresponding discussion to the revised version of our work. --- **Q3**: Isn't all of the minority data used in the warmup phase? **A3**: We'd like to clarify our methodology related to the usage of training data, particularly during the warmup stage. In this initial stage, we employ all data from the smallest group in the training dataset (waterbirds, land background), while for all other groups, we select a random subset equivalent in size to this smallest group. Post-warmup, we add additional data from the training dataset that were not seen during the warmup stage. At this point, new data from other small groups still remain (landbirds, water background). If possible, we aim to maintain balance among the remaining groups when enlarging the training dataset. Ultimately, the newly added data will exclusively come from the largest group (landbirds, land background). We will make sure to explain this more clearly in our revision. --- **Q4**: Is the data not just sampled from the training distribution early in the expansion phase? **A4**: To clarify, our method uses the training data (and not the validation set) for the warmup and expansion stages, with the intent to include all training data by the end of the expansion stage. While the data used in the expansion phase is drawn from the training dataset, it remains new to the model, having not been used during the warmup stage. As we add new data, there's a minor increase in training loss, which quickly drops as training advances. Regarding the referenced figure, both average accuracy and worst-group accuracy are plotted on test data. As we use this figure to highlight the importance of momentum, we want to focus on the model's actual performance as opposed to the training loss.
We also believe that the performance on test data will not always align with the trend of the training loss. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal! Comment: Dear authors, thank you for the rebuttal! I appreciate the new results on the robustness of performance with respect to the expansion-period training length, and the other clarifications.
Summary: In this paper, the authors propose a learning framework for improving the performance of classifiers on the worst group in the presence of spurious features. The authors focus on a two-layer convolutional neural network. They first show that imbalanced data and easy-to-learn spurious features can lead to bias in the classifier. Then, they propose a learning procedure that starts from balanced data and gradually progresses the model's learning from core features to all features. The new method is evaluated on several datasets, demonstrating that the new approach is fast and can improve accuracy on worst groups. Strengths: The paper is mostly well-written and easy to follow, and the idea makes sense. The proposed approach is motivated theoretically using a two-layer CNN model. The method is simple and relies on a two-stage training procedure, and therefore can be easily implemented and adapted to other algorithms. The proposed method improves the performance of the worst group in all evaluated examples; in some cases, it leads to significant improvements. Weaknesses: The theoretical analysis in the paper is focused on binary classification. The method requires tuning several hyperparameters. While the worst-group performance is improved in most cases, the overall accuracy degradation is non-negligible. The method is only evaluated on a small number of datasets. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How is the method applied to non-binary data? The analysis focuses on the activation z^3. Can this be replaced with more commonly used activations? Why is there a significant drop in average accuracy in some examples? Some parts are not well explained; for example, the caption in Figure 1 is unclear. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are discussed in the last section of the paper. Addressing some of the limitations presented by the authors is indeed important to increase the usability of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the feedback and the questions raised about potential confusions. Please find our detailed response below, for which we hope that the reviewer considers increasing their evaluation of our work. --- **Q1**: The theoretical analysis in the paper is focused on binary classification. The method requires tuning several hyperparameters. **A1**: We have indeed recognized the aforementioned points in our limitation section and believe they offer valuable directions for future research. We would like to further emphasize that the previous papers [1-4] also focused on the binary classification problem. To the best of our knowledge, no existing study has analyzed multi-class classification in the theoretical analysis of spurious correlations. Meanwhile, we consider the more complex case of a non-linear CNN, addressing data distributions that conventional linear models find challenging. We also want to kindly point to the NeurIPS guidelines on the limitation discussion, which state that “authors should be rewarded rather than punished for being up front about the limitations of their work.” [1] Sagawa et al. "An investigation of why overparameterization exacerbates spurious correlations." [2] Chen et al. "Self-training avoids using spurious features under domain shift." [3] Yang et al. "Understanding rare spurious correlations in neural networks." [4] Ye et al. "Freeze then train: Towards provable representation learning under spurious correlations and feature noise." --- **Q2**: While the worst-group performance is improved in most cases, the overall accuracy degradation is non-negligible. Why is there a significant drop in average accuracy in some examples? **A2**: Thanks for raising the question.
We wish to clarify that the research on spurious correlations primarily aims to develop a **robust and reliable predictor**, and therefore **worst-group accuracy is regarded as the main evaluation metric**, rather than striving for an even higher average accuracy than ERM. This can be found in previous studies [1-5], which highlighted the importance of improving worst-group accuracy and similarly exhibited such a “trade-off”. State-of-the-art methods like GroupDRO and DFR all improve worst-group accuracy while dropping average accuracy. Furthermore, the decrease in average accuracy is not necessarily considered a drawback in the context of spurious correlations. The presence of spurious correlations can mislead us into perceiving a high average accuracy as an indicator of a reliable predictor. However, a model may perform exceptionally well on larger groups, yet fail on smaller ones. In contrast, the gap between average accuracy and worst-group accuracy, as outlined in [6], can help identify whether a model is overly influenced by spurious features. For a more straightforward comparison, we added Table 3 in the 1-page PDF comparing the gaps for all models. The objective of creating a more dependable model is therefore to minimize this gap while enhancing worst-group accuracy. --- **Q3**: The method is only evaluated on a small number of datasets. **A3**: We have included the most common benchmark datasets used for spurious correlations, as in previous works [1-6]. Admittedly, the benchmark datasets for spurious correlations are limited; Waterbirds and CelebA have been the most widely used. We believe the development of more benchmark datasets specifically designed to evaluate models' performance on spurious correlations is important future work. However, within the scope of our current study, we have endeavored to provide a comprehensive evaluation using the most prevalent datasets available. [1] Sagawa et al.
"Distributionally Robust Neural Networks." [2] Liu et al. "Just train twice: Improving group robustness without training group information." [3] Creager et al. "Environment inference for invariant learning." [4] Zhang et al. "Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations." [5] Kirichenko et al. "Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations." [6] Haghtalab et al. "On-demand sampling: Learning optimally from multiple distributions." --- **Q4**: The analysis focuses on the activation z^3. Can this be replaced with more commonly used activations? **A4**: Yes, it can indeed be replaced with activations such as ReLU, as demonstrated by other works applying a similar analysis to different problems [1]. The cubic activation has been widely used in other theoretical works [2] for the simplicity of its analysis and for results that align with experiments. The inclusion of the ReLU activation, while feasible, would make our analysis more intricate, especially in terms of proof details. Our primary goal in this study is to maintain simplicity and ensure our theoretical motivation is easy to follow, thereby providing clear motivation for our proposed method. Lastly, our experiments also confirmed that our results hold in practice for ReLU activations. [1] Kou et al. "Benign overfitting for two-layer relu networks." [2] Jelassi et al. "Towards understanding how momentum improves generalization in deep learning." --- We hope that you can revisit your assessment of our work in light of the clarifications provided above. Should there be any other concerns, we are happy to provide further information. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for the hard work spent on this rebuttal.
I have some comments about the response: Q1) These were stated as weaknesses, and they indeed are; despite appearing in the limitations, these are some of the disadvantages of the method and therefore should be mentioned in this bullet. I truly appreciate that the authors mentioned these, and I give them positive credit for it. Q2) I understand this tradeoff and the importance of improving worst-case performance. Still, DFR offers a better tradeoff based on your reported results. Nonetheless, I see the value of improving performance on the worst case but suggest that this issue be discussed in the paper. Q3) Following the papers mentioned by the authors, I see additional datasets including MultiNLI, CMNIST, and Adult-Confounded. Q4) Noted; better to mention this. To conclude, the authors have addressed most of my concerns, and I have decided to raise my score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for your positive feedback and for increasing the score! We'll be sure to incorporate all the suggested changes into our final version. Regarding the additional datasets, we will add experiments on the CMNIST and MultiNLI datasets in our upcoming revision. We'll update the experimental results here if time allows.
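The average-vs-worst-group gap metric discussed in this thread is easy to make concrete. Below is a minimal sketch in our own Python (the function name `group_accuracies` and its arguments are ours, not the authors' code): accuracy is computed per (label, spurious-attribute) group, and the gap between average and worst-group accuracy flags reliance on spurious features.

```python
from collections import defaultdict

def group_accuracies(preds, labels, groups):
    """Per-group accuracy, overall average accuracy, and worst-group accuracy."""
    correct, total = defaultdict(int), defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        total[g] += 1
        correct[g] += int(p == y)
    per_group = {g: correct[g] / total[g] for g in total}
    avg = sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)
    return per_group, avg, min(per_group.values())

# Toy example: group 'b' is a small group the model partly fails on.
per_group, avg, worst = group_accuracies([1, 1, 0, 0], [1, 1, 1, 0], ['a', 'a', 'b', 'b'])
gap = avg - worst  # smaller gap = less reliance on the spurious feature
```

A model that looks strong on average (`avg` = 0.75 here) can still have a large gap to its worst group (`worst` = 0.5), which is exactly the failure mode the rebuttal describes.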
Summary: The paper works on understanding and mitigating the impact of spurious features. Specifically, the authors provide a theoretical analysis of the learning process of a non-linear two-layer CNN under spurious features. The theoretical insights reveal the need to start with balanced data and progressively expand the training set. The proposed PDE is based directly on these insights, and is demonstrated to be efficient and effective on certain datasets. Strengths: * (1) The authors provide theoretical insights on the learning process of a two-layer non-linear CNN under spurious features, which align with empirical observations. * (2) The proposed method starts with a group-balanced training set and then gradually injects more data samples to capture the core features. Its effectiveness and efficiency are demonstrated on several popular datasets when compared with existing methods. * (3) The paper is clearly presented and well organized, and the insights/motivations are easy to follow. Weaknesses: Although the proposed PDE is super efficient compared to existing learning/training-based approaches, it would be better to mention/discuss the efficiency of some fine-tuning or post-hoc approaches in the related work section; for example, fine-tuning the last layer is sufficient against spurious features [R1], and post-hoc adjustment of model predictions improves robustness to spurious features [R2]. References: R1: Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations. [ICLR'23] R2: Distributionally Robust Post-hoc Classifiers under Prior Shifts. [ICLR'23] Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * (1) The results on the synthetic data are a bit confusing: in Table 1, the performance of ERM on the worst group is 0%; besides, PDE has better worst-group performance than overall performance, which is kind of counterintuitive.
* (2) It would be much better if the authors could add more discussion on the role and selection/suggestions of certain parameters, e.g., the number of times the dataset is expanded and the number of data points to be added in each expansion. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors clearly stated their limitations at the end of the paper: * The theoretical analysis relies on a simplified model; * Certain hyper-parameters might be crucial in model training, i.e., the number of warmup epochs, the number of times the dataset is expanded, and the number of data points to be added in each expansion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We're grateful for your strong support of and suggestions on our work, based on which we have made modifications to our manuscript. Please find our detailed response below to the questions raised in the review. --- **Q1**: It would be better to mention/discuss the efficiency of some fine-tuning or post-hoc approaches in the related work section. **A1**: We thank the reviewer for pointing out the post-hoc approaches, and we will add a discussion in the related work section in our revision. Briefly, we believe that fine-tuning approaches [R1, R2] that re-train the last layer are also efficient methods that can effectively mitigate the influence of spurious correlations. Nevertheless, these methods still require training a model using ERM in the first stage and fine-tuning the last layer in the second stage. For example, on the Waterbirds dataset, the model is first trained for 300 epochs on the entire training dataset using ERM and further fine-tuned in the second stage. As we train PDE for no more than 9 epochs, compared to ERM, we believe PDE can be more efficient, since PDE also does not require further fine-tuning of the model. Nevertheless, we believe that for [R1, R2], fine-tuning only the last layer does not incur significant additional computation on top of ERM. --- **Q2**: The results on the synthetic data are a bit confusing. **A2**: Thanks for pointing out the inconsistency of PDE’s result in Table 1. We apologize for the oversight in reporting the incorrect worst-group results; the value we presented was actually the small-group accuracy. This has now been corrected, and the table has been updated accordingly in Table 1 of the attached PDF. The 0% accuracy of ERM is accurate. Our synthetic data was specifically designed so that a model relying only on spurious features will completely fail on the small-group test data.
If the model leans heavily on the spurious feature, its predictions will align with the spurious features, which are the opposite of the true labels in the small groups, resulting in worse accuracy than random guessing. Admittedly, the synthetic setting is a simpler and more extreme version of the real data. On real-world datasets, ERM does achieve 45.0% worst-group accuracy on CelebA and 58.2% on CivilComments. --- **Q3**. Add more discussions on the role and selection/suggestions of certain parameters. **A3**. We appreciate the suggestion and have added ablation studies on the Waterbirds dataset, as in Table 4 and Figure 1 of the attached PDF. Namely, our method is robust within a reasonable range of hyperparameter selections. Meanwhile, there are preferred choices for the hyperparameters. - The amount of data added during each expansion cannot be too large, as it will harm the performance. As similarly demonstrated in Figure 6 of Appendix A, gradual expansion is important to our method, and adding all data at once will result in worse performance. - There is a necessity to decay the learning rate after the warmup stage, while PDE is relatively robust to the extent of decay. - The number of dataset expansions depends on early stopping on the validation data. As shown in Figure 1 in the PDF, a smaller learning rate will result in more data expansions. We will add the ablation studies and corresponding discussion to the revised version of our work. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed responses and ablation studies, which have addressed most of my concerns. After reading the authors' rebuttal and the other reviewers' comments, I would like to keep my original score (8: Strong Accept).
Summary: The authors propose a new method called PDE to address spurious features and improve generalization. Building on the existing literature, the authors consider an imbalanced binary classification problem where two types of features coexist: core features and spurious features. The core features can lead to good generalization performance, but a classification model may pick up the spurious features. To avoid that, the proposed method (1) starts with a downsampled balanced classification problem as a warm-up, which will not pick up the spurious features, and (2) then gradually increases the imbalance ratio until the full dataset is used for training. The main justification for this two-stage algorithm is that gradient descent with momentum can benefit from the warm-up stage where no spurious features are picked up. Numerous experiments on synthetic data and real data are used to support the proposed method. Strengths: Classification with spurious features is a very practical and relevant problem. The proposed method is easy to implement and new to my knowledge. The experimental results show promising performance compared with alternatives. Overall, the warm-up stage in optimization seems to be an interesting idea. It starts with an "easy" downsampled dataset without data imbalance---which is known to be the culprit of picking up spurious features. Then, in the second stage, we hope to use the full dataset without picking up spurious features by using momentum. The presentation of this paper is clear and the theoretical/experimental results are easy to follow. Weaknesses: In my opinion, this paper misses a crucial element in classification with spurious features---that is, the effect of overparametrization. It is known, for example in [1], that there is a very important difference between underparameterization and overparametrization when spurious features are present.
One would, therefore, expect analyses of the inductive bias of the proposed method and how it may select a superior solution compared with ERM and recent alternatives. Unfortunately, the effect of overparametrization is absent from the analyses, as the number of neurons $J$ does not play a role in the analysis. Besides, while the two-stage optimization idea looks interesting, the analysis is a bit hand-wavy, so it is unconvincing to me why the momentum method is able to keep avoiding spurious features in the second stage. An investigation of the inductive bias would make this paper much stronger; perhaps even some in-depth experiments on synthetic datasets would shed light on the behavior of momentum in terms of avoiding spurious features. Lastly, I think the experimental results look fine, but I am not convinced that the proposed method is better than the recent alternatives. For example, in Table 2, while PDE achieves better accuracy on the worst classes, it usually incurs worse accuracy on average. Since there is a clear tradeoff between majority and minority classes in binary classification, better accuracy for the minority class can often be trivially improved by shifting the decision boundary. It would be more convincing if empirical results showed improvements for *all* classes. [1] Sagawa, S., Raghunathan, A., Koh, P. W., and Liang, P. An investigation of why overparameterization exacerbates spurious correlations. In International Conference on Machine Learning, pp. 8346–8356. PMLR, 2020. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We believe that there are some misunderstandings of the contribution of our paper. In light of our clarifications below, we hope that the reviewer considers increasing their evaluation of our work. --- **Q1**: This paper misses the effect of overparameterization. **A1**: We believe there is a misunderstanding of our results. We'd like to clarify that the CNN considered in our analysis is indeed overparameterized. Specifically, "overparameterization" refers to settings where a model's parameters exceed the data's dimension. In our study, the number of parameters in our CNN amounts to $J \times d$, which is indeed larger than the data dimension. Similar to previous work [2], the order of $J$ is assumed to be $J=\text{polylog}(d)$, and the model is mildly overparameterized. We will emphasize this point more explicitly in the revision. The fundamental work [1] studies the effect of spurious correlations on underparameterized models compared to overparameterized models, and confirmed that overparameterized models are specifically negatively affected by spurious correlations. Hence, we do not investigate the parameterization effect again but focus specifically on the overparameterized regime. **Based on their initial findings, we aim to further understand what properties of the data cause the learning of spurious features and, consequently, how to avoid it for overparameterized models.** The major focus of PDE is to improve the robustness of current deep learning models against spurious correlations. Hence, we do not investigate underparameterized models, which are less used in practice and were shown not to be affected. We want to highlight that all models considered, not only in our analysis but also in our synthetic and real experiments, are overparameterized.
Lastly, as many related theory papers explore different aspects of analysis under the overparameterized setting, we especially focus on the non-linear setting and consider convolutional architectures for the first time. --- **Q2**: It is unconvincing to me why the momentum method is able to keep avoiding spurious features in the second stage. **A2**: To show the importance of momentum, we have dedicated lines 210-228 to the explanation, along with real-data experiments in Figure 4. Briefly, the role of momentum in preserving the historical trend of learning was proven in previous work [2]. We use this theoretical finding to motivate our method. As discussed, the model learns the core feature in the warmup stage. During expansion, the momentum from warmup amplifies the core feature that is also present in the gradients of the newly added data. This learning process then corresponds to the second case discussed in Subsection 3.1, where the model learns the core feature, and the model can tolerate the imbalanced expansion data and continue learning the core feature. Furthermore, in Figure 4, our findings suggest that re-initializing the optimizer, and consequently its historical gradient, after the warmup stage will harm the model performance compared to preserving momentum from the warmup stage. Moreover, we provide more ablation studies on synthetic data in Table 1 in the 1-page PDF. As shown, keeping the momentum from the warmup stage provides an effective improvement to the worst-group accuracy of the model. --- **Q3**: Not convinced that PDE is better than the recent alternatives. It would be more convincing if empirical results showed improvements for all classes. **A3**: We clarify that **worst-group accuracy is the main objective for studies of spurious correlations, and has been regarded as the main evaluation criterion in prior work**. As evident in prior works [3-6], the focus has consistently been on enhancing worst-group accuracy.
A high average accuracy can often be misleading, as it might suggest a dependable predictor while the model relies on spurious correlations. Consequently, the predictor performs remarkably well on larger groups but fails when the spurious correlation is not present or indicates the contrary. We also believe there is a misunderstanding of the evaluation metric. Worst-group accuracy is not computed with regard to the **classes**, and a simple "shift of the decision boundary" is not practical. Instead, the **groups** represent varying combinations of spurious signals (backgrounds) and true labels (birds). Small groups are present within all classes, such as waterbirds with land background and landbirds with water background. **They are not a single under-performing class.** Hence, class accuracy is similar to average accuracy and doesn't indicate reliability in this context. We are happy to include an additional evaluation metric here. Since a high average accuracy does not indicate a reliable predictor and can in fact be alarming, we include another evaluation metric as per prior literature [7]. This metric measures the gap between the worst-group and average accuracy, where a smaller gap indicates a more robust model that is potentially less reliant on the spurious correlation. The results are presented in Table 3 in the PDF. --- [1] Sagawa et al. "An investigation of why overparameterization exacerbates spurious correlations." [2] Jelassi et al. "Towards understanding how momentum improves generalization in deep learning." [3] Liu et al. "Just train twice: Improving group robustness without training group information." [4] Creager et al. "Environment inference for invariant learning." [5] Zhang et al. "Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations." [6] Kirichenko et al. "Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations." [7] Haghtalab et al.
"On-demand sampling: Learning optimally from multiple distributions." We hope that you can re-evaluate our work based on the clarifications above. If any concerns remain unaddressed, we would appreciate it if you let us know so that we can provide further details. --- Rebuttal Comment 1.1: Comment: I have read the replies by the authors and other reviewers. I appreciate the detailed explanations and references. While I feel a bit surprised by the general positivity from the other reviewers, I do respect the consensus here, and so I increased the score by 1. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for raising your evaluation. We would like to ask if there are still questions or concerns that lead to the borderline-reject rating, and we would be happy to discuss and address them as soon as possible.
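The heavy-ball mechanism argued for in A2 of the thread above can be sketched in a few lines (our own illustration, not the authors' implementation; `momentum_step` and its parameters are names we chose): the momentum buffer is an exponentially decayed sum of past gradients, so the direction accumulated during warmup keeps being added to every update during expansion.

```python
def momentum_step(w, grad, buf, lr=0.1, beta=0.9):
    """One heavy-ball update: buf accumulates an exponentially decayed
    sum of past gradients, and the weights move along buf, not just grad."""
    buf = [beta * b + g for b, g in zip(buf, grad)]
    w = [wi - lr * bi for wi, bi in zip(w, buf)]
    return w, buf

# Two steps with the same gradient: the buffer grows (1.0 -> 1.9),
# so the historical direction amplifies the step during "expansion".
w, buf = momentum_step([0.0], [1.0], [0.0])
w, buf = momentum_step(w, [1.0], buf)
```

Re-initializing the optimizer after warmup, as in the ablation the authors mention, corresponds to resetting `buf` to zeros and discarding this accumulated direction.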
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for the feedback and especially the detailed suggestions that help us improve our manuscript. We appreciate that many reviewers commend our paper for its clarity, the alignment of our theoretical insights with empirical findings, the simplicity and novelty of our method, and the effectiveness of our method shown in experiments. In response to the questions raised, we've addressed each review with itemized answers. The updated additional tables and figures are organized into the 1-page PDF that we attach here. We hope this provides clarity and resolves any potential misunderstanding. Should there be additional comments or questions, we are more than happy to discuss them. Pdf: /pdf/292a2b63948a0a2df264c1496be71aa8450e67fe.pdf
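The warmup-then-expansion data schedule described across the rebuttals (warmup on a group-balanced subset sized by the smallest group, then progressively adding unseen training data) can be sketched as follows. This is a minimal illustration in our own Python, not the authors' released code; `pde_schedule` and its arguments are hypothetical names, and for brevity the expansion here samples unseen data uniformly, whereas the paper balances the remaining groups when possible.

```python
import random
from collections import defaultdict

def pde_schedule(dataset, n_expansions, k_per_expansion):
    """dataset: list of (x, y, group) tuples; yields the index set used at each stage."""
    by_group = defaultdict(list)
    for i, (_, _, g) in enumerate(dataset):
        by_group[g].append(i)
    m = min(len(v) for v in by_group.values())  # size of the smallest group
    # Warmup: all of the smallest group, plus a random subset of size m from every other group.
    used = set()
    for idxs in by_group.values():
        used.update(random.sample(idxs, m))
    yield sorted(used)
    # Expansion: add unseen training data in small increments, keeping the
    # optimizer's momentum state across stages (not shown here).
    remaining = [i for i in range(len(dataset)) if i not in used]
    for _ in range(n_expansions):
        if not remaining:
            break
        random.shuffle(remaining)
        batch, remaining = remaining[:k_per_expansion], remaining[k_per_expansion:]
        used.update(batch)
        yield sorted(used)
```

On a toy dataset with a small group of 2 and a large group of 4, the warmup stage uses 4 balanced examples and each expansion step adds the next unseen ones until all data is included.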
NeurIPS_2023_submissions_huggingface
2023
Incentives in Federated Learning: Equilibria, Dynamics, and Mechanisms for Welfare Maximization
Accept (poster)
Summary: The paper shows that when every agent has a known concave utility function, the Nash equilibrium exists and can be reached independently via best response updates. Moreover, when all agents have linear costs, the proposed budget mechanism (which modifies the utility function) will lead to a Nash equilibrium that maximizes p-mean welfare and fairness across agents. Strengths: 1. The differences with existing works are clearly highlighted. The paper novelly considers (i) the unconstrained setting where agents maximize their concave net utility functions, (ii) optimizing the p-mean welfare of the Nash equilibrium and (iii) the derivation of $\beta^*$. 2. Most of the claims and assumptions of the paper (e.g., concave payoff function, convex cost functions) are well justified. 3. The paper is well-written and easy to follow. The implications described after the theorems aid understanding. Weaknesses: 1. The paper makes two assumptions: a) the cost and payoff functions are common knowledge, and b) any agent utility (payoff and cost) only depends on the number of samples contributed. These assumptions limit the applicability/significance of the work and should be better justified. For a), Line 317 proposes that the server can train a public accuracy function and broadcast it to all agents. However, in many FL applications, the server may lack the data to do so, and clients may be unwilling to share due to the cost incurred. Moreover, the agents may disagree on the relative weightage of the cost/utility function (e.g., how much monetary cost should one unit of payoff function improvement be worth?) For b), different subsets will lead to models with different performance and costs (e.g., performance will be lower if an agent only clones data, or one agent may have noisier data). Further justifications (e.g., a trusted data-sharing platform ensuring that agent $i$'s (or agents $i$ and $j$'s) data is sampled i.i.d.) would help. 2.
In Sec 4, the definition of $u_i$ is unclear (re-defined on Line 260; 297) and is not explicitly expressed as a function of $p_i$ and $\beta^*$. The descriptions can be improved. My interpretation is that $p_i$ acts as a cost subsidy; hence $u_i = a_i - (c_i - p_i)$ and $(b_i + c_i - p_i = B_i)$. From Theorem 4.1 onwards, $p_i$ uses $\beta^*$ instead of any $\beta$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Theorem 1 states that the NE using $\beta^*$ will maximize the $p$-mean welfare for any $p \le 1$. Does that require the maximizer of the $p$-mean welfare for every $p \le 1$ to be the same? If so, why and how? One potential counterexample with 2 agents is: out of the utility sample vectors (2,2) and (1,4), (2,2) has higher egalitarian welfare while (1,4) has higher utilitarian welfare. Minor suggestions (not affecting scores) * Line 217: explain that $||g(.)||_2 = 0$ at a Nash equilibrium * Line 255: $c_i$ is used as both the cost function and the per-unit cost. The latter should use another notation for less ambiguity. * Equation after Line 264: add brackets for the summation * Line 271: spelling ‘defferred’ * Line 289 can be clearer: the maximization problem can be converted to a convex minimization problem. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The conclusion briefly identifies the two limitations: 1) the assumption that cost and payoff functions are common knowledge and 2) agents may not report truthfully. The limitations (and weakness 1) can be further elaborated on in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the comments, suggestions and questions. **Weakness 1:** The paper makes two assumptions: a) the cost and payoff functions are common knowledge, and b) any agent utility (payoff and cost) only depends on the number of samples contributed. These assumptions limit the applicability/significance of the work and should be better justified. For a), Line 317 proposes that the server can train a public accuracy function and broadcast it to all agents. However, in many FL applications, the server may lack the data to do so, and clients may be unwilling to share due to the cost incurred. Moreover, the agents may disagree on the relative weightage of the cost/utility function (e.g., how much monetary cost should one unit of payoff function improvement be worth?) For b), different subsets will lead to models with different performance and costs (e.g., performance will be lower if an agent only clones data, or one agent may have noisier data). Further justifications (e.g., a trusted data-sharing platform ensuring that agent $i$'s (or agents $i$ and $j$'s) data is sampled i.i.d.) would help. **Response:** We will add further justifications on the assumptions. We believe this will help improve the paper, and we thank you for the suggestions. a) We agree that we require costs to be publicly known, or at least verifiable by the mechanism designer. This is a common assumption made in previous work, e.g. Karimireddy et al. [1]. Indeed, costs are common knowledge in many real-world applications, e.g. (1) in many ML applications, each agent derives their training data from manually labeling a subset of a publicly available dataset like CIFAR or ImageNet, and the cost of labeling a dataset is usually known; (2) in autonomous driving, where each data point is a random path taken under random external conditions. Studying incentives when costs are not public is an interesting direction for future work. Secondly, you raise a great point about comparing the scales of cost and utility.
However, enterprises can (and often need to) correlate the model accuracy and their revenue. For example, improved accuracy of a diagnostic test in a hospital via FL will lead to reduced costs in terms of time, salary, and labor of the hospital employees, all of which can be quantified. Likewise for examples (1) and (2) above. b) This is an excellent point, and studying the model where agent utilities depend on the shared data set (and not just the size of the data set) is an interesting question for future work. Our goal is to make progress on the setting initiated by [1] and [2] where utility functions of the agents depend only on the amount of data shared. This assumption is justified in several practical scenarios, e.g. (1) and (2) above, and we will emphasize this in the paper by incorporating the suggested justification. Thank you so much! **Weakness 2:** In Sec 4, the definition of $u_i$ is unclear (re-defined on Line 260; 297) and is not explicitly expressed as a function of $p_i$ and $\beta^*$. The descriptions can be improved. My interpretation is that $p_i$ acts as a cost subsidy; hence $u_i = a_i - c_i + p_i$ and $b_i + c_i - p_i = B_i$. From Theorem 4.1 onwards, $p_i$ uses $\beta^*$ instead of any $\beta$. **Response:** Your interpretation of the subsidy $p_i$ is right. We will improve our exposition to make the definitions and intuitive interpretations clearer. **Question:** Theorem 1 states that the NE using $\beta^*$ will maximize the $p$-mean welfare for any $p \le 1$. Does that require the maximizer of the $p$-mean welfare for every $p$ to be the same? If so, why and how? One potential counterexample with 2 agents is: out of the utility sample vectors (2,2) and (1,4), (2,2) has higher egalitarian welfare while (1,4) has higher utilitarian welfare. **Response:** Our Theorem 1 shows that our mechanism always admits a NE. Secondly, it shows that whenever agents contribute positively at NE $s^*$ (i.e. 
$s^* > 0$), then $s^*$ also maximizes the $p$-mean welfare among all feasible utility vectors arising from strictly positive data contribution vectors – call the latter set $U^+$. This implies that there is one point in $U^+$ that (weakly) dominates all the other points in $U^+$. Therefore, in your example there should exist a utility vector that dominates both (2,2) and (1,4) and hence that will be the NE as well as p-mean welfare maximizer. We will clarify this in the final version. References: [1] Karimireddy et al. “Mechanisms that Incentivize Data Sharing in Federated Learning”, NeurIPS 2022 FL Workshop. [2] Blum et al. “One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning”, ICML 2021. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification and further justification on the assumptions. I acknowledge that I have read the individual rebuttals and your global response.
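As an aside, the $p$-mean welfare and the dominance argument in this exchange are easy to sanity-check numerically. Below is a small illustrative Python sketch; the utility vectors (2,2), (1,4), and the dominating (2,4) are toy numbers from the discussion above, not from the paper:

```python
import math

def p_mean_welfare(u, p):
    """Generalized p-mean of a positive utility vector u, for p <= 1.

    p = 1 is the utilitarian (arithmetic) mean, p -> 0 the Nash
    (geometric) mean, and p -> -infinity approaches the egalitarian
    (min) welfare.
    """
    n = len(u)
    if p == 0:
        return math.exp(sum(math.log(x) for x in u) / n)  # geometric mean
    return (sum(x ** p for x in u) / n) ** (1.0 / p)

def dominates(u, v):
    """u weakly dominates v: every agent is at least as well off under u."""
    return all(a >= b for a, b in zip(u, v))

# Reviewer's example: which of (2,2) and (1,4) is better depends on p,
# but a vector dominating both, e.g. (2,4), maximizes the p-mean for all p.
```

For $p = 1$ the vector (1,4) wins (mean 2.5 vs 2), while for very negative $p$ (near-egalitarian welfare) the order flips; a dominating vector such as (2,4) beats both at every $p \le 1$, which is exactly the resolution the authors give.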
Summary: This paper studies a collaborative Federated Learning framework. Specifically, the authors proposed a utility model, which is the agent’s payoff function minus the data sharing cost function. Under the assumption that the payoff function is concave and the cost function is convex, the authors show the existence of NE. In addition, they proposed a budget-balanced mechanism, which involves payments to the agents, to maximize the agents’ utilities at NE. Experimental results are provided to demonstrate the proposed algorithm's performance, compared with the previous work FedAvg by Blum et al. [2021]. Strengths: Overall, I found the paper to be well-written, and if you take the model as given, the paper gives a fairly satisfying and complete first investigation – the questions asked are exactly the ones I would hope are answered first. Weaknesses: One major weakness of the paper lies in the utility model proposed. While the paper's key idea of explicitly defining the data-sharing cost is illuminative and contributes to addressing the existence question prevalent in the literature, the utility model itself raises concerns. The critical assumption made in the model is that an agent cannot achieve better utility on its own, even when the cost is zero. Additionally, the assumption suggests that collaboration with the crowd leads to a better utility at the equilibrium state, as opposed to the option of an agent choosing to quit after some rounds of collaboration. This assumption poses limitations and potentially oversimplifies the complexities of real-world scenarios. It overlooks the possibility that an agent might achieve better utility by pursuing an independent course of action, disregarding the collaborative approach altogether. By not considering the potential benefits an agent could gain from individual efforts, the utility model fails to capture the nuanced dynamics of decision-making and potential trade-offs that exist in real-world situations. 
Therefore, this utility model undermines the paper's overall conclusions and hinders its practical applicability. I would have liked a more concrete/precise discussion of the motivation for the utility model, for example, what if the cost is too high and the agent prefers to quit the collaboration? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you provide more justification for why the agents always prefer to participate in the collaboration, even if the data-sharing cost is zero when the agent quits the collaboration? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments. **Question:** Can you provide more justification for why the agents always prefer to participate in the collaboration, even if the data-sharing cost is zero when the agent quits the collaboration? **Response:** In a general federated learning framework, an agent may not want to participate due to a high data-sharing cost. However, our mechanism will subsidize such agents (see Line 262), and therefore they may gain more by participating than the utility they can achieve on their own. We also remark that federated learning as a paradigm is most useful for agents who cannot get high utility on their own, and will benefit from the collaboration. For instance, agents having individual data sets on which trained models do not generalize well will benefit from models trained on data from other agents. Naturally, if an agent has an extremely high cost of data sharing, e.g. due to privacy laws, and the payment from our mechanism is insufficient, she will not participate in the collaboration. --- Rebuttal Comment 1.1: Comment: Dear Reviewer vczd, We would appreciate if you could acknowledge and/or respond to the authors' rebuttal. Thank you, AC --- Rebuttal Comment 1.2: Comment: I appreciate the authors' justification, I'm happy to keep my score. I do want to suggest that if the paper is accepted, the author should further consider the practical implications of the results and maybe provide a more thorough discussion in the next version. --- Reply to Comment 1.2.1: Comment: We will definitely add more discussions on the practical implications in the final version. Thank you very much!
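To make the participation and equilibrium-discovery logic discussed in this thread concrete, here is a toy best-response sketch in Python. The concave log payoff, quadratic cost, and all numbers are our own illustrative assumptions, not the paper's model or its FedBR-BG protocol; it only illustrates agents repeatedly best-responding on their data contributions until a fixed point (an approximate Nash equilibrium) is reached:

```python
import math

# Hypothetical agents with utility
#   u_i(s) = ALPHA[i] * log(1 + sum_k s_k) - GAMMA[i] * s_i**2
# (concave payoff in the total data, convex cost in own contribution).
ALPHA = [4.0, 6.0, 8.0]                  # payoff scales (assumed)
GAMMA = [0.5, 0.4, 0.3]                  # cost scales (assumed)
GRID = [k * 0.01 for k in range(1001)]   # candidate contributions in [0, 10]

def utility(i, s_i, s_others):
    return ALPHA[i] * math.log1p(s_i + s_others) - GAMMA[i] * s_i ** 2

def best_response(i, s_others):
    """Agent i's utility-maximizing contribution, given the others' total."""
    return max(GRID, key=lambda s_i: utility(i, s_i, s_others))

def best_response_dynamics(steps=100):
    s = [0.0, 0.0, 0.0]
    for _ in range(steps):               # sequential best-response sweeps
        for i in range(len(s)):
            s[i] = best_response(i, sum(s) - s[i])
    return s                             # approximate Nash equilibrium
```

In this toy instance every agent contributes a positive amount at the fixed point; an agent whose cost dwarfed her payoff (and any subsidy) would instead best-respond with zero, matching the caveat in the response above.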
Summary: The paper proposes a simple and elegant mechanism for incentivizing data sharing in FL. The mechanism is budget balanced, and any p-mean welfare is maximized at Nash equilibrium. They show the existence of Nash equilibrium (NE) under mild assumptions on agents' payoffs and costs. They also show that agents can discover the NE via best response dynamics. In addition, they introduce an FL protocol FedBR-BG based on the budget-balanced mechanism, utilizing best response dynamics. They empirically validate that FedBR-BG outperforms the basic best-response-based protocol without additional incentivization, as well as the standard federated learning protocol FedAvg, in terms of achieving p-mean welfare. Strengths: The paper introduces public-good-provision approaches to the problem of incentivizing data sharing in FL. They propose a simple and elegant mechanism that achieves desirable properties (based on Falkinger et al., 2000). The theoretical results are elegant and solid. The presentation is crystal clear. The results do not seem to rely on any specific feature of FL (not sure whether this is an advantage or disadvantage). There is one result that seems novel and surprising, which is the characterization of the optimal $\beta^*$ (Definition 2) and the fact that it optimizes all p-mean welfare. The proof for the optimality of $\beta^*$ is non-trivial. My evaluation will highly depend on the novelty of this specific result. Weaknesses: The mechanism seems to be largely based on the previous literature on public good provision, especially (Falkinger et al. 2000). The mechanism is basically the same as the one in (Falkinger et al. 2000). The comparison to the existing literature is extremely inadequate. (I didn't see a related work section and the references in the intro barely cover the large literature.) There is a large literature on incentive mechanisms for FL, see [1] for example. 
[1] Tu et al., 2021, "Incentive Mechanisms for Federated Learning: From Economic and Game Theoretic Perspective" Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Is the derivation of $\beta^*$ entirely novel, or does it build upon prior work? If it is a novel derivation, why not frame the paper as a study on public good provision? The outcomes seem to have broader implications beyond FL. Does $\beta^*$ depend on the value of $p$? Falkinger et al. (2000) seem to suggest that $\beta = 1-1/n$ is optimal for $p=1$. Does that mean $\beta=1-1/n$ is optimal for all $p \le 1$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and questions. **Weakness 1:** The mechanism seems to be largely based on the previous literature on public good provision, especially (Falkinger et al. 2000). The mechanism is basically the same as the one in (Falkinger et al. 2000). **Response:** Indeed, our mechanism is inspired by Falkinger et al. (2000). The latter assume a simpler model of public goods provision, whereas our model is more general. In Falkinger et al. (2000) (see also [1]), each agent $i$ has a fixed income $B_i$ which is split between private consumption $b_i$ and contribution to the public good $g_i$, and her utility $u_i$ is a function of $b_i$ and $G = \sum_k g_k$, the total monetary contribution to the public good. In our model, agent $i$ spends part of her budget towards the cost $c_i(s_i)$ of sharing $s_i$ data points while $b_i = B_i - c_i(s_i)$ is unspent. Her utility then depends on $b_i$ and $s = \sum_k s_k$, the total data contribution. Our model thus subsumes that of Falkinger et al. when $c_i(s_i) = s_i$ for all $i$. As far as we know, this intimate connection between FL and public goods provisioning has not been explored before, and is novel. Furthermore, imposing a tax/subsidy per agent in our mechanism and that of Falkinger et al. is a common technique used in many other mechanisms for public goods, e.g., see [2]. Our novel contribution is the derivation of the optimal parameter $\beta^*$ for a more general model. **Weakness 2:** The comparison to the existing literature is extremely inadequate. (I didn't see a related work section and the references in the intro barely cover the large literature.) There is a large literature on incentive mechanisms for FL, see [1] for example. [1] Tu et al., 2021, "Incentive Mechanisms for Federated Learning: From Economic and Game Theoretic Perspective" **Response:** Thank you for the suggestion. 
We will add a related work section with a detailed discussion of prior works on incentives in FL, including the suggested reference. **Question 1:** Is the derivation of $\beta^*$ entirely novel, or does it build upon prior work? If it is a novel derivation, why not frame the paper as a study on public good provision? The outcomes seem to have broader implications beyond FL. **Response:** The derivation of $\beta^*$ is novel, as our model is more general than that of Falkinger et al. We were motivated to address the specific issue of data sharing incentives in FL, but the paper could indeed be more broadly framed as a study on public good provision. We will mention the same in the final version. Again, thank you for the suggestion. **Question 2:** Does $\beta^*$ depend on the value of $p$? Falkinger et al. (2000) seem to suggest that $\beta = 1-1/n$ is optimal for $p=1$. Does that mean $\beta=1-1/n$ is optimal for all $p \le 1$? **Response:** We note that $\beta = 1-1/n$ is optimal only when $c_i(s_i) = s_i$ for all $i$ (the result of Falkinger et al.). In general, $\beta^*$ is given by Lemma 6, and depends on the cost functions $c_i$ of the agents, and not on $p$. Our main result has two important elements. First, we show that our mechanism always admits a Nash equilibrium. The second has a technical subtlety: whenever the agents contribute positively at a NE, say $s^*$ (thus $s^* > 0$), then $s^*$ also maximizes the $p$-mean welfare for $p \le 1$ among all utility vectors corresponding to sample vectors where all agents contribute positively. Thus the value $\beta^*$ is optimal for all $p \le 1$ whenever the NE has every agent contributing positively. In our specific application, the assumption of “every agent contributing positively” makes sense since an agent cannot participate without contributing data, in the sense that the server will not communicate with an agent who does not respond! References: [1] Falkinger. 
"Efficient Private Provision of Public Goods by Rewarding Deviations from Average." Journal of Public Economics, 1996. [2] Andreoni and Bergstrom. “Do Government Subsidies Increase the Private Supply of Public Goods?”, Public Choice, 1996. --- Rebuttal Comment 1.1: Comment: Dear Reviewer ntpc, We would appreciate if you could acknowledge and/or respond to the authors' rebuttal. Thank you, AC --- Rebuttal Comment 1.2: Comment: Thank you for the clarification! I am happy with the response.
Summary: The authors study the Nash equilibrium in federated learning, under the assumption of concave utility and convex cost, and design welfare-maximizing payments for users. Experiments are conducted to verify the mechanism. Strengths: The topic is of important value, and the paper is well written. Weaknesses: My major concern is technical novelty and contribution. 1. The analysis of equilibrium and best response dynamics is standard, especially under assumptions such as convex cost and concave utilities. 2. Though the authors discuss the definition of fairness and welfare maximization, there is a lack of discussion of the literature on welfare maximization in federated learning, let alone fair subsidies in mechanism design. 3. The design of the payment seems to require knowledge of the true cost of every user, which is neither practical nor incentive compatible. 4. The performance of fairness (in Table 1) is compared with methods that are not designed to maximize welfare, which is not meaningful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. More details on the payment (knowledge required, performance comparison with state-of-the-art mechanism) would help. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and questions. **Weakness 1:** The analysis of equilibrium and best response dynamics is standard, especially under assumptions such as convex cost and concave utilities. **Response:** Using fixed point theorems to prove the existence of Nash equilibrium is a widely used technique in many settings, including under our assumptions of concave payoffs and convex costs. Although these assumptions may seem restrictive, we show that NE need not exist when we go even slightly beyond the concave/convex regime: Theorem A.2 shows an example where no Nash equilibrium exists for agents with non-concave payoffs and linear costs (also stated on Line 206). **Weakness 2:** Though the authors discuss the definition of fairness and welfare maximization, there is a lack of discussion of the literature on welfare maximization in federated learning, let alone fair subsidies in mechanism design. **Response:** We will add a thorough discussion of prior literature studying welfare maximization in FL. Typically, FL methods compute a model which maximizes some weighted sum of a function of agents’ accuracies; e.g. FedAvg, AFL [1], q-FFL [2]. However, these methods ignore data sharing costs and assume agents honestly contribute their available data. Unlike our work, they do not consider the strategic aspects arising from the cost of data sharing. **Weakness 3:** The design of the payment seems to require knowledge of the true cost of every user, which is neither practical nor incentive compatible. **Response:** We agree that we require costs to be publicly known, or at least verifiable by the mechanism designer. This is a standard assumption made in previous works, e.g. Karimireddy et al. [3], and Blum et al. [4] (who assume the data sharing cost is known and is linear). It seems costs are common knowledge in many applications, e.g. 
(1) in many ML applications, each agent derives their training data from manually labeling a subset of a publicly available dataset like CIFAR or ImageNet; (2) in autonomous driving, where each data point is a random path taken under random external conditions. Studying incentives when costs are not public is an interesting direction for future work. **Weakness 4:** The performance of fairness (in Table 1) is compared with methods that are not designed to maximize welfare, which is not meaningful. **Response:** Table 1 compares the p-mean welfare of our methods, and not fairness. We compare three distributed FL protocols: FedAvg (where agents are not strategic and contribute all data), FedBR (where agents are strategic but there are no payments), and FedBR-BG (our budget-balanced mechanism with payments for strategic agents). Both FedBR and FedBR-BG admit Nash equilibria (are fair), while FedAvg ignores strategic aspects of data sharing. We observe from Table 1 that: (1) FedAvg has lower welfare (i.e. accuracy/payoff minus cost), since agents contribute all data even if it is costly. (2) When agents are strategic about sharing data but there are no payments, Nash equilibria may not maximize welfare (see the FedBR column of Table 1), and some agents may contribute zero data points (see Example 1 in Sec. 4). (3) FedBR-BG obtains the highest welfare, thus experimentally supporting our result proving that our mechanism with payments maximizes the welfare of the agents at Nash equilibria. We also compare with a recent baseline MWFed in [4] on MNIST. MWFed is an adaptation of FedAvg, where the agents adjust their contributions based on the accuracy of the received model. We show that our mechanism outperforms MWFed in terms of p-mean welfare, empirically verifying our theoretical results. 
Specifically, the $p$-mean welfare values are:

| Protocol | p=0.6 | p=0.8 | p=1 |
|:-|:-:|:-:|:-:|
| MWFed | 21.648 | 8.803 | 5.203 |
| FedBR | 24.784 | 9.495 | 5.340 |
| FedBR-BG | 25.430 | 9.708 | 5.459 |

**Question:** More details on the payment (knowledge required, performance comparison with state-of-the-art mechanism) would help. **Response:** Thanks for the suggestions for improving this paper. We will include the explanations mentioned above, including (1) details about costs being known, (2) a thorough literature survey and comparison to existing methods which maximize welfare but ignore data sharing costs and strategic aspects. As suggested, we include new experimental results, comparing our method with a recent baseline MWFed of [4], and observe that our mechanism outperforms MWFed in terms of $p$-mean welfare (please refer to the response to weakness 4 above). We also remark that there is no “state-of-the-art” mechanism for a direct comparison. For instance, Karimireddy et al. [3] compare their mechanism against different self-proposed baseline mechanisms inspired by fairness notions, e.g. the proportional data mechanism (which returns to an agent $i$ a model that gives a payoff proportional to the number of data points contributed by agent $i$) (see Appendix C.2 of [3]). References: [1] Mohri et al. “Agnostic Federated Learning”, ICML 2019. [2] Li et al. “Fair Resource Allocation in Federated Learning”, ICLR 2020. [3] Karimireddy et al. “Mechanisms that Incentivize Data Sharing in Federated Learning”, NeurIPS 2022 FL Workshop. [4] Blum et al. “One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning”, ICML 2021. --- Rebuttal Comment 1.1: Comment: Dear Reviewer t7td, We would appreciate if you could acknowledge and/or respond to the authors' rebuttal. Thank you, AC --- Rebuttal Comment 1.2: Comment: I thank the authors for the response. I found the response to weakness 3 and the questions convincing. 
I am raising my rating to 5. I suggest the authors incorporate these discussions in the revised manuscript. --- Reply to Comment 1.2.1: Comment: Thank you very much! We will incorporate the discussion in the final version.
Rebuttal 1: Rebuttal: We thank the reviewers for their time, suggestions and questions that we believe will improve the quality of the paper. Below we summarize our overall response to the reviewers’ questions and comments. - We discuss our assumption of agent costs being common knowledge. We borrow this assumption from significant previous works, and it appears justified in many applications. We describe these in detail in our response to Reviewers t7td and KoDx. Thank you for bringing this up; we will add a discussion to this end in our final version. - We will add a discussion comparing with other related work studying welfare maximization and incentives in FL, as suggested by Reviewers t7td and NtpC. In summary, FL methods typically compute a model which maximizes some weighted sum of a function of agents’ accuracies. However, these methods ignore data sharing costs and assume agents honestly contribute their available data. Unlike our work, they do not consider the strategic aspects arising from the cost of data sharing. - We will clarify the novelty of our mechanism and the statement of Theorem 4.1, as suggested by Reviewers t7td, NtpC and KoDx. The derivation of the optimal parameter $\beta^*$ in our mechanism is novel. Moreover, Theorem 4.1 states that the mechanism admits a Nash equilibrium (NE), and whenever agents contribute positively at NE, it also maximizes the p-mean welfare among all utility vectors corresponding to positive sample vectors, for $p \le 1$. - We perform a new experimental comparison with a relatively recent baseline MWFed from [1]. Our new experiments also show that our mechanism outperforms MWFed in terms of $p$-mean welfare, further supporting our theoretical results. Please refer to the response to Reviewer t7td for the new empirical results. References: [1] Blum et al. “One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning”, ICML 2021.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Loss Decoupling for Task-Agnostic Continual Learning
Accept (poster)
Summary: This paper investigates the stability-plasticity tradeoff problem in a task-agnostic continual learning (CL) setting by decoupling the loss of the new task (LODE). LODE separates the new task into two objectives: distinguishing new classes from old classes, and distinguishing among the new classes. The authors analyze the impact of these objectives on forgetting and propose a strategy that assigns different weights to achieve a better stability-plasticity tradeoff. The proposed method is extensively evaluated through experiments and ablation studies on CIFAR10, CIFAR100, and TinyImageNet, demonstrating its effectiveness. Strengths: - The proposed idea is simple and interesting, offering a promising approach to improve the stability-plasticity tradeoff. It is likely to gain attention within the CL community. - The submission is well-written and easy to follow. - The experimental settings and comparisons are detailed, considering both offline and online settings. Weaknesses: - The experiments only employ ResNet18; it would be beneficial to include a larger model for evaluation. - The paper lacks a description of the learning rate. Is it the same for all compared methods? Was a grid-search conducted to determine it? - The use of TSNE visualization could provide a better way to validate the assumptions made. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can LODE be applied in an exemplar-free setting? It would be helpful to have a discussion or a simple experiment addressing this. - The proposed method is only combined with similar replay methods (ER, DER, and ESMER). More combinations or a discussion of this limitation would be valuable. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The experiments only employ ResNet18; it would be beneficial to include a larger model for evaluation.** **A1**: Our experiments are built on top of the mammoth [3] continual learning repository in PyTorch like many existing works [2,3]. This repository consists of only ResNet18 and a small MLP for all the experiments. To make our experiments convincing, we do not change the model architecture. CL with larger models is also very interesting, but this is not the focus of this paper. We leave it for future study. Thanks for the suggestion. **Q2: The paper lacks a description of the learning rate. Is it the same for all compared methods? Was a grid-search conducted to determine it?** **A2**: Yes, the learning rate is determined by a grid-search. Specifically, the open-source code of the existing work DER has conducted a grid-search to determine hyperparameters, including the learning rate, for a wide range of baselines. Since our experimental setting follows this work, the hyperparameters, including the learning rate, are kept consistent with the code of this work. For the baselines that are not included in the code of DER but follow the experimental settings of DER, we follow their open-source code to set the hyperparameters, including the learning rate, which is also determined by a grid-search. For the methods not following the experimental setting of DER, we conduct a grid-search on validation datasets to determine the hyperparameters, including the learning rate. When combining our LODE with other methods, we keep the learning rate consistent with the original methods. We will add the description of the learning rate in the final version of our paper. Thanks for the suggestion. **Q3: The use of TSNE visualization could provide a better way to validate the assumptions made.** **A3**: We present the TSNE visualization in Figure III of the uploaded PDF. 
As we can see, removing loss $l_n(\cdot)$ makes the representations of the old classes overlap less than removing loss $l_{ce}(\cdot;C_n)$ does. Therefore, removing loss $l_n(\cdot)$ makes the model suffer from less forgetting than removing loss $l_{ce}(\cdot;C_n)$. We will add this figure in the final version of our paper. Thanks for the suggestion. **Q4: Can LODE be applied in an exemplar-free setting? It would be helpful to have a discussion or a simple experiment addressing this.** **A4**: Yes, LODE can be applied in an exemplar-free setting. We apply our LODE to a popular exemplar-free continual learning method called prototype augmentation and self-supervision (PASS) [1], which is designed for task-agnostic problems and utilizes a cross-entropy loss for learning new tasks. Specifically, we decouple the loss in PASS in the same way as Equation (3) and apply our LODE to it. The following table shows the results of PASS and LODE (PASS). We can find that LODE (PASS) outperforms PASS on Seq-CIFAR100. More results of combining our LODE with exemplar-free methods will be added in the final version of our paper. Thanks for the suggestion.

| Methods | ACC (%) |
|:-|:-:|
| PASS | 47.45±0.37 |
| LODE (PASS) | **50.49±0.22** |

**Q5: The proposed method is only combined with similar replay methods (ER, DER, and ESMER). More combinations or a discussion of this limitation would be valuable.** **A5**: From the methodology perspective, any method with a loss in the form of Equation (1) can be combined with our method. We combined it with ER, DER, and ESMER since these methods are either state-of-the-art or competitive in a wide range of settings, including both the offline and online continual learning settings. In **A4**, we present the combination of our method with another continual learning method (PASS). We will add the results and a further discussion of the limitations of the experiments in the final version of our paper. Thanks for the suggestion. [1] Zhu F, Zhang X Y, Wang C, et al. 
Prototype augmentation and self-supervision for incremental learning. CVPR, 2021. [2] Sarfraz F, Arani E, Zonooz B. Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning. ICLR, 2023. [3] Buzzega P, Boschini M, Porrello A, et al. Dark experience for general continual learning: a strong, simple baseline. NeurIPS, 2020. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' efforts in the rebuttal. Their response has addressed my concerns, and I will maintain my current rating. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your response. We would like to inquire if you have any other concerns that might require further elaboration. We would be happy to provide additional explanations as needed.
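For readers unfamiliar with the decoupling discussed in this thread, the sketch below shows one plausible way to split a new-task cross-entropy into a "within new classes" term and a "new vs. old" term that carries its own weight. This is our illustrative reading of the idea (the weight `rho` and the exact grouping are assumptions), not Equation (3) from the paper verbatim:

```python
import math

def softmax_ce(logits, target):
    """Cross-entropy of one example: log-sum-exp(logits) - logits[target]."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_z - logits[target]

def decoupled_loss(logits, target, new_idx, rho=0.1):
    """Illustrative decoupled new-task loss (not the paper's Eq. (3) verbatim):
    a within-new-classes cross-entropy plus a weighted new-vs-old term, where
    rho down-weights the term that interferes with the old classes."""
    # Objective 1: distinguish among the new classes only.
    new_logits = [logits[j] for j in new_idx]
    within_new = softmax_ce(new_logits, new_idx.index(target))
    # Objective 2: distinguish new classes from old ones as groups:
    # -log P(label is a new class) under the full softmax.
    m = max(logits)
    log_z_all = m + math.log(sum(math.exp(z - m) for z in logits))
    log_z_new = m + math.log(sum(math.exp(logits[j] - m) for j in new_idx))
    new_vs_old = log_z_all - log_z_new
    return within_new + rho * new_vs_old
```

As a quick sanity check, when there are no old classes (i.e. `new_idx` covers every logit) the second term vanishes and the loss reduces to the ordinary cross-entropy.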
Summary: The paper titled "Loss Decoupling for Continual Learning" addresses the challenge of catastrophic forgetting in continual learning, where a model needs to learn multiple tasks sequentially. The paper focuses on the task-agnostic problem, where task identities are not available during inference. The main contribution of the paper is the proposal of a method called loss decoupling (LODE) for continual learning. LODE separates the learning objectives for a new task by decoupling the loss, allowing for different weights to be assigned to different objectives. Experimental evaluations on multiple datasets demonstrate that LODE outperforms existing state-of-the-art replay-based methods, achieving a better trade-off between stability and plasticity. Strengths: 1. The paper introduces a novel method, LODE, which decouples the learning objectives for a new task in continual learning. This approach offers a fresh perspective on addressing the trade-off between stability and plasticity. The idea of assigning different weights to different objectives based on their impact on forgetting is unique and contributes to the advancement of continual learning methods. 2. The paper demonstrates a high level of quality in terms of experimental design, clear presentation of results, and thorough analysis. The use of popular datasets, the comparison with state-of-the-art methods, and the inclusion of multiple runs with different seeds enhance the reliability and validity of the experimental findings. The clarity of the paper's structure and explanations contributes to its overall quality. 3. The experimental results demonstrate the effectiveness of LODE in outperforming existing state-of-the-art methods, which has practical implications for enhancing continual learning algorithms and facilitating real-world applications. Weaknesses: 1. 
The paper focuses solely on the task-agnostic problem and does not explore the separation of learning objectives in other continual learning problems. It would be valuable for future work to investigate the application of loss decoupling to different problem settings, thereby broadening the scope and applicability of the proposed method. 2. Lack of comparison with other loss decoupling methods [1][2]: While the paper proposes LODE as a novel method, it does not compare LODE with other loss decoupling techniques, if any exist. Including such a comparison would strengthen the paper's novelty and showcase the advantages of LODE over alternative approaches. 3. The experiments primarily consider image classification tasks, and the paper does not thoroughly explore the effectiveness of LODE for other types of tasks. Conducting experiments and providing insights on the generalizability of LODE to different domains would enhance the paper's impact and practical relevance. 4. The application of the proposed method in continual learning has strong limitations: 1) It must be a replay-based method and satisfy the loss Equation 1; 2) It must be a task-agnostic setting. However, the paper uses a very broad title, covering the entire field of continual learning. [1] Sun, W., Li, Q., Zhang, J., Wang, W., & Geng, Y. A. (2023). Decoupling Learning and Remembering: A Bilevel Memory Framework With Knowledge Projection for Task-Incremental Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20186-20195). [2] Zhao, Z., Zhang, Z., Tan, X., Liu, J., Qu, Y., Xie, Y., & Ma, L. (2023). Rethinking Gradient Projection Continual Learning: Stability/Plasticity Feature Space Decoupling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3718-3727). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. 
Could you elaborate on how the proposed LODE method can be applied to other continual learning problems beyond the task-agnostic setting? What are the potential challenges and considerations in extending LODE to different problem formulations? 2. Can you provide further insights or examples regarding the generalizability of LODE to other types of tasks, apart from image classification? 3. Regarding the limitation of focusing on the task-agnostic problem, have you identified any potential strategies or directions for separating learning objectives in other types of continual learning problems? If not, what are the challenges or considerations in extending loss decoupling to task-aware problems? 4. How did you get Figure 1 (b)? Is this based on some specific data set or an ideal description? What does the y-axis represent? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: / Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The paper focuses solely on the task-agnostic problem. ... investigate the application of loss decoupling to different problem settings.** **A1**: The task-agnostic problem we consider in this work is a challenging problem where task identities are not available during testing. In contrast, for another popular problem in continual learning (CL), called the task-aware problem, the model can obtain task identities during testing. Most CL methods, including the work that the reviewer mentioned, consider only one of them. Specifically, the work mentioned by the reviewer, called bilevel memory system with knowledge projection (BMKP) [1], considers only the task-aware problem, as shown in Section 3.2 of their paper and their code. In contrast, our method considers the task-agnostic problem, which is more challenging than the task-aware problem. In **A6**, we give insight into broadening the applicability of our proposed method to different problem settings, which will be our future work. Thanks for the suggestion. **Q2: Lack of comparison with other loss decoupling methods [1][2].** **A2**: We do not compare with the methods that the reviewer mentioned since these methods consider the task-aware problem while our method considers the task-agnostic problem. Specifically, BMKP [1] requires the task identities to choose the corresponding knowledge representations during testing, making it unsuitable for the task-agnostic problem. Space decoupling (SD) [2] does not explicitly mention that it only considers the task-aware problem, but its experiments completely follow a task-aware CL method called TRGP [3], indicating its focus on the task-aware problem. Furthermore, both of these methods were published at CVPR 2023. Since the conference date of CVPR 2023 was after the NeurIPS 2023 submission deadline, we were not aware of these two methods when we submitted our paper. We will cite these two methods in the final version of our paper and discuss the differences between our method and theirs.
**Q3: The experiments primarily consider image classification tasks, ... not explore other types of tasks.** **A3**: We primarily consider image classification tasks since many existing methods, including the works the reviewer mentioned [1][2], consider primarily image classification tasks. We have introduced this limitation in Section 6 of our paper, and it will be further studied in future work. **Q4: Limitations: 1) It must be a replay-based method and satisfy Equation 1; 2) It must be a task-agnostic setting. The paper uses a very broad title.** **A4**: First, replay-based methods are among the most representative CL methods, and most existing state-of-the-art replay-based methods satisfy Equation (1). Second, the task-agnostic problem is challenging and many existing replay-based methods focus on this problem. We will change the title to “Loss Decoupling for Task-Agnostic Continual Learning” in the final version of our paper. Thanks for the suggestion. **Q5: Provide further insights regarding the generalizability of LODE to other types of tasks?** **A5**: First, we should emphasize that many CL methods, including BMKP and SD mentioned by the reviewer, focus solely on image classification. Second, from the methodology perspective, our method can be applied to other tasks like NLP and speech data analysis in the task-agnostic problem, where the learning of a new task can be decoupled into learning the new task itself and learning to distinguish between the new task and the old tasks. This is because the main contribution of our method lies in decoupling the loss function, which is a general mechanism for different types of input data. **Q6: Elaborate on how the LODE method can be applied to other continual learning problems?
identify potential strategies for separating learning objectives in other continual learning problems?** **A6**: We can leverage the semantic information of the classes to separate learning objectives in other types of CL problems, including task-aware problems. Taking the task-aware problem as an example, assume the model is going to learn to distinguish three classes, including tiger, wolf and bee. Assume the model has learned to distinguish between cat and dog. Since tiger is semantically similar to cat and wolf is semantically similar to dog, the model may change very little when learning to distinguish between tiger and wolf. Since changing less is conducive to overcoming catastrophic forgetting, we can divide the new task into two learning objectives, where the first objective makes the model distinguish between tiger and wolf, and the second objective makes the model distinguish between other category pairs. We will add this discussion in the final version of our paper. Thanks for the suggestion. **Q7: How did you get Figure 1 (b)? Based on some specific data set or an ideal description? What does the y-axis represent?** **A7**: Figure 1 (b) is an ideal description. This figure describes the coupling property of existing replay-based methods. Specifically, existing replay-based methods either sample from $\mathcal{D}_t$ and $\mathcal{M}$ separately, or directly sample from $\mathcal{D}_t\cup\mathcal{M}$. Then, they compute the loss for the new task through cross-entropy, which couples different new learning objectives. Figure 1 (b) is used to introduce this phenomenon figuratively. The y-axis represents the model’s abilities, including its ability to adapt itself to the new task (plasticity) and its ability to overcome forgetting (stability). This explanation will be added in the final version. Thanks for the comment. [1] Sun W, Li Q, Zhang J, et al. Decoupling Learning and Remembering: A Bilevel Memory Framework with Knowledge Projection for Task-Incremental Learning.
CVPR, 2023. [2] Zhao Z, Zhang Z, Tan X, et al. Rethinking Gradient Projection Continual Learning: Stability/Plasticity Feature Space Decoupling. CVPR, 2023. [3] Lin S, Yang L, Fan D, et al. TRGP: Trust Region Gradient Projection for Continual Learning. ICLR, 2022. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I thank the authors for the detailed response, which addresses most of my concerns. I update my rating to Borderline accept. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your response. We would like to inquire if you have any other concerns that might require further elaboration. We would be happy to provide additional explanations as needed.
Summary: Several replay-based continual learning methods decouple their loss into a loss based on samples from the replay buffer to maintain performance on old tasks, and a loss based on data from the current task to learn the new task. This paper proposes to further decouple the latter loss into a component that helps the model distinguish whether a sample is in the new task or from previous tasks, and a component that helps the model distinguish among just the new classes in the current task. The paper shows that decoupling this loss and giving different weights to the two components helps in maintaining performance and results in better final accuracy for Seq-CIFAR10, Seq-CIFAR100, and Seq-TinyImagenet. Strengths: - The paper proposes an intuitive decoupling of the loss used in many continual learning works. It shows that doing this decoupling is useful, and can improve performance by weighting the different parts of the decoupled loss differently. - The paper’s experiments and presentation of experiments are well done, although I do ask the authors to address some of the questions I present below. The hyperparameter sensitivity analysis is useful in showing that the method is at least somewhat robust to hyperparameter changes. Weaknesses: The main weaknesses are mostly related to the additional questions and experiments I ask for in the next section. There are a few experiments I’d like to see that could make the paper stronger. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - For section 4.2.3, did you do a hyperparameter search for the values of $\beta_1$ and $\beta_2$ for the methods that weren’t yours? - The $l_n$ loss in the denominator is essentially a binary classification loss. Since you are only using positive examples to compute this loss, it increases the value of the logits of the current task and decreases the value of the logits of previous tasks for your current data. Did you try also using some data from the replay buffer for this loss?
- For Figure 2, it might be helpful to also see on the graph a line where neither component is removed (ie both components of the decoupled loss without reweighting), and a line with the coupled loss used in traditional ER (ie Equation 2). This could tell us how much the decoupling formulation that is proposed actually corresponds to the usual coupled formulation. Another graph to add could be the accuracy on the new task being trained, allowing us to get a sense of how these losses affect plasticity. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work presents a basic discussion of limitations and societal impacts. The limitations section could be further developed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: For section 4.2.3, did you do a hyperparameter search for the values of $β_1$ and $β_2$ for the methods that weren’t yours?** **A1**: In Section 4.2.3, we did not do a hyperparameter search for the values of $β_1$ and $β_2$ for the methods that weren’t ours and kept their values consistent with our method. To be more rigorous, we add the hyperparameter search for the values of $β_1$ and $β_2$ for the methods that weren’t ours and present them in the following tables. The results lead to the same conclusion as that in Section 4.2.3. We will replace Table 2 with this table in the final version of our paper. Thanks for the suggestion.

| LODE (DER++) | Seq-CIFAR10 | Seq-CIFAR100 |
|:-|:-:|:-:|
| $β_1=C,β_2=ρ\frac{\vert C_n\vert}{\vert C_o\vert}$ | **75.45±0.90** | **46.31±1.01** |
| $β_1=β_2=ρ\frac{\vert C_n\vert}{\vert C_o\vert}$ | 71.18±0.80 | 37.49±1.79 |
| $β_1=β_2=C$ | 73.80±0.20 | 42.08±1.71 |
| $β_1=ρ\frac{\vert C_n\vert}{\vert C_o\vert},β_2=C$ | 73.19±0.15 | 40.79±0.12 |

| LODE (ESMER) | Seq-CIFAR10 | Seq-CIFAR100 |
|:-|:-:|:-:|
| $β_1=C,β_2=ρ\frac{\vert C_n\vert}{\vert C_o\vert}$ | **74.53±0.95** | **55.06±0.35** |
| $β_1=β_2=ρ\frac{\vert C_n\vert}{\vert C_o\vert}$ | 73.41±0.40 | 45.64±0.87 |
| $β_1=β_2=C$ | 73.08±0.81 | 52.37±0.87 |
| $β_1=ρ\frac{\vert C_n\vert}{\vert C_o\vert},β_2=C$ | 72.38±0.24 | 51.86±0.35 |

**Q2: The loss $l_n$ in the denominator is essentially a binary classification loss. Since you are only using positive examples to compute this loss, it increases the value of the logits of the current task and decreases the value of the logits of previous tasks for your current data. Did you try also using some data from the replay buffer for this loss?** **A2**: We did not try using any data from the replay buffer for this loss since the replay loss $l_{rep}(\cdot)$ has already included the cross-entropy loss $l_{ce}(\cdot)$ computed by the samples of old classes in most replay-based methods.
In this case, we can decouple the loss $l_{ce}(\cdot)$ included in $l_{rep}(\cdot)$ in a similar way as described in Equation (3) of the paper: $$l_{ce}(f_\Theta(x),y)=-\log\left(\frac{\exp(o_y)}{\sum_{i=1}^{m}\exp(o_i)}\right)-\log\left(\frac{\sum_{i=1}^{m}\exp(o_i)}{\sum_{i=1}^{m+n}\exp(o_i)}\right),$$ where the label $y$ belongs to the old classes. Here, $m$ and $n$ denote the number of old and new classes, respectively. The second term of this equation is the loss $l_n(\cdot)$ computed using the negative examples. Since the replay loss $l_{rep}(\cdot)$ has already included the loss $l_n(\cdot)$ computed using the negative examples, we did not try using data from the replay buffer for this loss. **Q3: For Figure 2, it might be helpful to also see on the graph a line where neither component is removed (ie both components of the decoupled loss without reweighting), and a line with the coupled loss used in traditional ER (ie Equation 2). This could tell us how much the decoupling formulation that is proposed actually corresponds to the usual coupled formulation. Another graph to add could be the accuracy on the new task being trained, allowing us to get a sense of how these losses affect plasticity.** **A3**: We add the line where neither component is removed in Figure I of the uploaded PDF. As we can see, when neither component is removed from Equation (2), the model forgets either more than or similarly to removing loss $l_{ce}(\cdot;C_n)$. We also add a graph in Figure II of the uploaded PDF, which shows the accuracy of each new task during the training. As we can see, removing either loss $l_{ce}(\cdot;C_n)$ or loss $l_n(\cdot)$ decreases the model’s plasticity and leads to lower performance on the new task. Hence, both of these two losses are necessary for the learning of a new task. We will add these graphs in the final version of our paper. Thanks for the suggestion. [1] Buzzega P, Boschini M, Porrello A, et al. Dark experience for general continual learning: a strong, simple baseline. NeurIPS, 2020.
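As a side note, the softmax decoupling discussed in this thread rests on a simple algebraic identity that can be sanity-checked numerically. The NumPy sketch below is our own illustrative code (function and variable names are not from the paper): it splits the full cross-entropy into a within-group term and a group-membership term, and verifies that the two terms sum exactly to the ordinary full-softmax loss.

```python
import numpy as np

def decoupled_ce(logits, y, n_old):
    """Split the full-softmax cross-entropy for a sample whose label y lies in
    one group of classes (here: indices >= n_old, i.e. the 'new' classes) into
    (i) a within-group classification term and
    (ii) a group-membership (new-vs-old) term, analogous to l_n."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                             # shift for numerical stability
    log_all = np.log(np.exp(z).sum())           # log-sum-exp over all classes
    log_new = np.log(np.exp(z[n_old:]).sum())   # log-sum-exp over new classes only
    l_within = -(z[y] - log_new)                # -log( e^{o_y} / sum_new e^{o_i} )
    l_member = -(log_new - log_all)             # -log( sum_new / sum_all )
    return l_within, l_member

logits = [1.0, 0.2, -0.5, 2.0, 0.7]             # 3 old classes, 2 new classes
lw, lm = decoupled_ce(logits, y=3, n_old=3)

# Ordinary cross-entropy over all classes for the same sample:
z = np.array(logits) - max(logits)
full_ce = -(z[3] - np.log(np.exp(z).sum()))

print(abs(lw + lm - full_ce) < 1e-9)            # the two terms sum to the full loss
```

Because the identity is exact, reweighting the two terms separately (as LODE does) strictly generalizes the coupled loss: equal unit weights recover the standard cross-entropy.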
--- Rebuttal Comment 1.1: Comment: I am satisfied with the response, and have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your response. We would like to inquire if you have any other concerns that might require further elaboration. We would be happy to provide additional explanations as needed.
Summary: This paper investigates class-incremental continual learning (CL). By showing that two objectives, i.e., classifying among new classes and classifying between new classes and old classes, can have different impacts on the performance of CL, this paper proposes to decouple the new-task loss and assign different weights to these two objectives. Experiments are conducted on multiple datasets to verify the effectiveness of the proposed method LODE in comparison with related baseline methods. Strengths: It is interesting to investigate the different impacts of inter-task and intra-task classification on the performance of class-incremental CL. Weaknesses: 1. The results in Figure 2(a) seem very counter-intuitive. When removing $l_n$, the training of the current task only focuses on the classification within the task and ignores the differentiation from old classes in the memory. In this case, the forgetting should be larger compared to the case of removing $l_{ce}$, while in Figure 2(a) the forgetting was smaller. The authors didn't give a convincing explanation of the underlying reason for the phenomenon. 2. The weight selection (5) in LODE is entirely based on heuristics and manually determined. It is hard to justify the selection of the weights, but the performance of LODE is highly sensitive to the weight selection, as shown in Table 2. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How did you select the values of $C$ and $\rho$ in (5)? 2. Conceptually, the loss decoupling can be used in CL without using data replay. In this way, wouldn't it be clearer to understand the impact of these two parts on the performance of CL? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The results in Figure 2(a) seem very counter-intuitive. The authors didn't give a convincing explanation of the underlying reason for the phenomenon.** **A1**: The results in Figure 2 (a) of the paper are intuitively reasonable. First, since replay-based methods usually keep limited samples in memory, when the model learns a new task, it has access to much fewer samples from the old classes than from the new classes. Therefore, utilizing loss $l_n(\cdot)$ to learn to distinguish between new classes and old classes introduces a risk of biasing the model towards the new classes, potentially leading to serious catastrophic forgetting [1]. In contrast, loss $l_{ce}(\cdot;C_n)$ is independent of the old classes, thereby avoiding the risk of biasing the model towards the new classes. Second, as shown in Figure 2 (c) of the paper, the value of loss $l_n(\cdot)$ is typically larger than the value of $l_{ce}(\cdot;C_n)$ during training. This finding is consistent with existing work [3], which suggests that a larger loss for the new task may result in larger feature drift, leading to more forgetting. Based on these two reasons, it is intuitively reasonable that loss $l_n(\cdot)$ causes larger forgetting for the old classes than loss $l_{ce}(\cdot;C_n)$, implying that removing loss $l_n(\cdot)$ results in less forgetting than removing loss $l_{ce}(\cdot;C_n)$. We will add this explanation in the final version of our paper. Thanks for the comment. **Q2: The weight selection (5) in LODE is entirely based on heuristics and manually determined. It is hard to justify the selection of the weights, but the performance of LODE is highly sensitive to the weight selection as shown in Table 2.** **A2**: Actually, the weight selection (5) in LODE is not based on heuristics or manually determined. We give the explanation for Equation (5) in Section 3.3.
First, loss $l_n(\cdot)$ aims to distinguish new classes from the old classes, and the replay loss $l_{rep}(\cdot)$ in Equation (4) keeps the performance on the old classes. Minimizing losses $l_n(\cdot)$ and $l_{rep}(\cdot)$ together makes the model learn to distinguish whether a given sample belongs to the new classes or the old classes, which is a binary classification task. If we set both the weights of $l_n(\cdot)$ and $l_{rep}(\cdot)$ as constants during the whole learning process, the model may suffer from large forgetting of the old classes. More specifically, when the number of tasks is large and each new task only introduces a small number of new classes, the number of old classes is much larger than the number of new classes. In that case, setting the weight of $l_n(\cdot)$ as a constant may introduce a risk of biasing the model towards the new classes. In contrast, setting the weight of $l_n(\cdot)$ proportional to the ratio $\frac{|C_n|}{|C_o|}$ removes this bias, since the weight of $l_n(\cdot)$ will be small when the number of old classes is much larger than the number of new classes. Different from $l_n(\cdot)$, loss $l_{ce}(\cdot;C_n)$ is independent of the old classes and thus does not introduce a risk of biasing the model towards the new classes. Therefore, we set the weight of $l_{ce}(\cdot;C_n)$ as a constant $C$ to maintain the model’s plasticity. Table 2 verifies the effectiveness of this weight selection strategy. **Q3: How did you select the values of $C$ and $ρ$ in (5)?** **A3**: We keep the value of $C$ as 1 for all the experiments in Table 1 and Table 3, which we find is enough for the model to perform well. We follow the existing continual learning method DER [4] to select the value of $ρ$. Specifically, we select the value of $ρ$ through grid search on a validation set obtained by sampling 5% of the training set. We investigate the influence of these two hyperparameters in Figure 4.
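To make the weight schedule discussed above concrete, here is a tiny illustrative sketch of the Equation (5) style selection ($β_1$ constant at $C$, $β_2$ proportional to $|C_n|/|C_o|$); the function and argument names are ours, not from the paper's code.

```python
def lode_weights(n_new, n_old, C=1.0, rho=0.5):
    """Illustrative version of the Eq. (5) weight selection:
    beta1 (on l_ce over new classes) is a constant C to preserve plasticity;
    beta2 (on l_n, new-vs-old) is proportional to |C_n| / |C_o|,
    so it decays as the pool of old classes grows, limiting new-class bias."""
    beta1 = C
    beta2 = rho * n_new / n_old
    return beta1, beta2

# Early in training (few old classes) the new-vs-old term gets substantial weight ...
print(lode_weights(n_new=10, n_old=10))   # (1.0, 0.5)
# ... but after many tasks it is down-weighted automatically.
print(lode_weights(n_new=10, n_old=90))
```

Note how the schedule needs no per-task tuning: as old classes accumulate, $β_2$ shrinks on its own while $β_1$ stays fixed.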
**Q4: Conceptually, the loss decoupling can be used in CL without using data replay. In this way, wouldn't it be more clear to understand the impact of these two parts on the performance of CL?** **A4**: Thanks a lot for this informative comment. Yes, the loss decoupling can be used in some CL methods without using data replay, as long as these methods use a standard cross-entropy loss to learn new tasks, which is the common choice for many CL methods. We focus on replay-based methods since they are more effective for the task-agnostic problem due to the replaying of old data, particularly in the online continual learning setting. Here, we choose a CL method without using data replay, called prototype augmentation and self-supervision (PASS) [2], to evaluate our method. The following two tables show the results of PASS on Seq-CIFAR100. The results are consistent with CL methods using data replay. Specifically, the first table shows the variation of the accuracy for the first task during the learning of subsequent tasks. We can find that removing $l_n(\cdot)$ makes the model suffer from less forgetting than removing $l_{ce}(\cdot;C_n)$. The second table shows that LODE (PASS) outperforms PASS. We will add these results in the final version of our paper.

|  | After Task 1 | After Task 2 | After Task 3 | After Task 4 | After Task 5 |
|:-|:-:|:-:|:-:|:-:|:-:|
| Remove $l_{ce}(\cdot;C_n)$ | **87.40±0.22** | 54.28±3.41 | 56.18±4.08 | 50.77±2.57 | 43.92±0.87 |
| Remove $l_n(\cdot)$ | **87.40±0.22** | **82.38±0.89** | **63.27±0.75** | **54.18±0.68** | **48.50±0.95** |

|  | ACC (%) |
|:-|:-:|
| PASS | 47.45±0.37 |
| LODE (PASS) | **50.49±0.22** |

[1] Ahn H, Kwak J, Lim S, et al. Ss-il: Separated softmax for incremental learning. ICCV, 2021. [2] Zhu F, Zhang X Y, Wang C, et al. Prototype augmentation and self-supervision for incremental learning. CVPR, 2021. [3] Sarfraz F, Arani E, Zonooz B. Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning. ICLR, 2023.
[4] Buzzega P, Boschini M, Porrello A, et al. Dark experience for general continual learning: a strong, simple baseline. NeurIPS, 2020. --- Rebuttal Comment 1.1: Comment: Thanks a lot for the response. It clarifies most of my questions. I will raise my score to 5. While the current weight selection strategy sounds reasonable, I do think that a more principled way, e.g., through optimization, could further improve the quality of this paper. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your response. Previously, we performed weight selection through optimization. Specifically, we treated the weights of different losses as learnable parameters and formulated the weight selection as a bilevel optimization problem. However, our findings showed that this approach did not give notable improvements compared to using Equation (5) for weight selection. Additionally, it introduced extra computational overhead. If the reviewer could provide a specific reference, method, or even code related to this, we would greatly appreciate it.
Rebuttal 1: Rebuttal: We appreciate the reviewers' positive comments: 1. It is interesting to investigate the different impacts of inter-task and intra-task classification on the performance of class-incremental CL. (reviewer Y5HQ) 2. The proposed method is intuitive and can improve performance in many continual learning works. / The proposed method contributes to the advancement of continual learning methods. (reviewer odRp, reviewer n7mC) 3. This paper offers a promising approach to improve the stability-plasticity trade-off. / The paper introduces a novel method and offers a fresh perspective on addressing the trade-off between stability and plasticity. (reviewer Tdag, reviewer n7mC) 4. The submission is well-written and easy to follow. / The clarity of the paper's structure and explanations contributes to its overall quality. (reviewer Tdag, reviewer n7mC) 5. The experimental design demonstrates a high level of quality. / The paper’s experiments and presentation of experiments are well done. / The experimental settings and comparisons are detailed. (reviewer n7mC, reviewer odRp, reviewer Tdag) At the same time, we are grateful for the valuable questions and concerns of all the reviewers and have provided individual responses accordingly. Pdf: /pdf/088df3bad81abaf4d3054f4d5264cfcb409a2c8a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Make You Better: Reinforcement Learning from Human Gain
Reject
Summary: The authors present a new reinforcement learning objective, dubbed Reinforcement Learning from Human Gain (RLHG), that explicitly incorporates an understanding of human performance with an intervention into the objective function. They show that training with this added component improves outcomes in a MOBA, both on the overall objective of winning and on a subobjective related to satisfaction. Strengths: They present an important research problem (ML systems working as collaborators) and make a good attempt at addressing it. They did experiments with human participants. The domain they trained models in is non-trivial, and showing results in it demonstrates good capability. Weaknesses: I don't believe the main claim. The authors say that by explicitly adding their more complex models of human wants/behaviours they can get better performance, but they don't compare against a model that attempts to optimize for those things directly. A perfect RL agent that has correct values for the different objectives should be able to learn the optimal policy without using their proposed more complicated methods. They only test against a model that is trained to win, and a model that is trained on short-term value optimization. How does the method compare to another model trained with similar data and objectives? The lack of comparison makes this feel like a demonstration that PPO and deep learning work, not that their new methods are better. Technical Quality: 3 good Clarity: 3 good Questions for Authors: "compromising AI autonomy" (line 61), what does this mean? Isn't the whole point of this method to shape the agent's actions? Where did you get 10^20000 for the state space? (line 91) You don't need to specify that Honor of Kings is a MOBA and that it is (renowned|complex|popular|etc.) each time you reference it. Dec-POMDP is never defined. Line 215: "We were authorized" by whom? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not explain why their decomposition of the loss/training loop is _theoretically_ better than any other; they only provide some _empirical_ evidence to suggest it. The authors only discuss the RL literature in their framing/discussion. In other domains, such as recommender systems, the issue of over-optimizing for a single metric to the detriment of other objectives is a major area of research. Consider looking over some data science papers from KDD as a starting point. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper! We greatly appreciate your feedback on our work. We provide clarification below for your questions and concerns. If you have any further questions or comments, we will be happy to discuss them further. ---- **Q1**: Clarification on Weaknesses **A1**: Please allow us to provide further clarity on the RLHG agent and the two baseline agents, i.e., the Wukong agent and the HRE agent, used in the experiment. - The Wukong agent [1] utilizes the PPO algorithm to optimize the Win Rate using environmental (original) rewards. - The HRE agent utilizes the PPO algorithm to optimize both the Win Rate and Human Performance by directly incorporating human goal rewards (i.e., rewards for achieving goals) into the optimization objective. - The RLHG agent utilizes the PPO algorithm to optimize both the Win Rate and Human Performance by incorporating the positive gains of human returns under effective enhancement versus no enhancement into the optimization objective. (1) "they don't compare against a model that attempts to optimize for those things directly" — Both HRE and RLHG use the same human goal reward function during the optimization process. The key difference lies in how these rewards are integrated. HRE directly incorporates human goal rewards into the optimization objective, which can lead to human-agent credit assignment issues, as evidenced by the experimental results presented in Section 4.2. The HRE agent learned many ineffective enhancement behaviors that interfered with its original objective of winning the game, ultimately resulting in a significant decrease in the Win Rate. To address these issues, one potential solution is to carefully reshape the human goal reward function. However, this approach heavily relies on domain knowledge and expertise. 
RLHG tackles this challenge by introducing a "baseline" (the primitive performance of humans in achieving goals without enhancement) and incorporating the positive gain (the benefit of the human performance under effective enhancement over the "baseline") into the optimization objective. (2) "A perfect RL agent that has correct values for the different objectives should be able to learn the optimal policy without using their proposed more complicated methods." — We totally agree with your viewpoint. In fact, the RLHG method can be seen as introducing a "baseline" that filters out some incorrect values (the human returns from invalid enhancement behaviors and negative enhancement behaviors) for the optimization objective. **Q2**: "compromising AI autonomy" (line 61), what does this mean? **A2**: In our work, an autonomous agent refers to an agent whose behavior is optimized with original rewards. We optimize by fine-tuning the pre-trained Wukong agent (optimized for winning games) using human goal rewards, and expect the optimized agent to improve the performance of humans in achieving goals while maintaining its autonomy (Win Rate) as much as possible. In the experiment, we found that the HRE agent learned a lot of ineffective enhancement behaviors, manifested in the agent frequently following humans when it is not necessary and being very vulnerable to human behavior. These ineffective enhancement behaviors greatly damage the autonomy of the agent, leading to a significant reduction in the Win Rate. In contrast, the RLHG agent only learns from its effective enhancement behaviors, which greatly preserves the autonomy of the agent (only a slight drop in Win Rate) while significantly improving the performance of humans in achieving goals. We have included a supplementary description of this in the main text. **Q3**: Where did you get 10^20000 for the state space? 
**A3**: This data comes from [2], "As for state space, the resolution of Honor of Kings map is 130,000 by 130,000 pixels, and the diameter of each unit is 1,000. At each frame, each unit may have different status such as hit points, levels, and gold. Again, the state space is at magnitude of 10^20,000 with significant simplification." We have included this reference in the main text. **Q4**: Dec-POMDP is never defined. **A4**: Dec-POMDP is shorthand for **Dec**entralized **P**artially **O**bservable **M**arkov **D**ecision **P**rocess. We have included this definition in the main text. **Q5**: "We were authorized" by who? **A5**: We were authorized by **Ye, et al.** to use the Wukong [1] agent and the JueWu-SL [3] agent. We have included this information in the main text. ---- [1] Ye, et al. Towards playing full moba games with deep reinforcement learning. NeurIPS'2020. [2] Wu, et al. Hierarchical macro strategy model for moba game ai. AAAI'2019. [3] Ye, et al. Supervised learning achieves human-level performance in moba games: A case study of honor of kings. TNNLS'2020. --- Rebuttal Comment 1.1: Comment: Thank you for responding thoroughly to my questions. Reading all the responses has answered my questions. Reading your response and those to the other reviewers has not changed my position. I do not believe this is a significant enough contribution. I think this result should be published at another venue if the other minor issues are fixed, in particular increasing the clarity of the writing.
Summary: This paper proposes a new method called Reinforcement Learning from Human Gain (RLHG) to effectively enhance human goal-achievement abilities in collaborative tasks with known human goals. The paper evaluates the RLHG agent in the widely popular Multi-player Online Battle Arena (MOBA) game, Honor of Kings, by conducting experiments in both simulated environments and real-world human-agent tests. Strengths: The problem setting considered by the paper is tightly connected to some real-world problems (e.g., assistive agents in MOBA games). Experiments are performed in a real-world application (Honor of King). Weaknesses: It is difficult to evaluate the major contribution of the paper (the two-step training process) because 1) the major contribution of the paper, as the authors claimed in the paper, is orthogonal to many of the complications in the paper (e.g., multiple agents, multiple goals, partial observability). These complications are not contributions and they make it hard to understand the contribution of the paper clearly. Maybe the authors added them because their experiments are in Honor of Kings? 2) probably because the paper focuses too much on these complications, the paper fails to explain why RLHG is a good idea and provides clear evidence. For example, why do the authors propose estimating primitive human performance rather than primitive human+agent performance? What is making estimating primitive human performance helpful? Can this idea be used in environments without humans? 3) the writing of the paper is unsatisfying. I was completely lost when reading the paper. Please see Questions for my questions about the paper. 4) in experiments, the new method achieved a worse winning rate compared with the baseline method. I can understand this performance drop given that there are improvements regarding other metrics. What should I learn from this indeterminate result? 
Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Section 1: "rather than replacing them outright" why would you talk about this? How is this related to the previous text? "The RLHG method can be seen as a plug-in that can be directly utilized to fine-tune any pre-trained agent to be assistive in human enhancement." Do you literally mean ANY pre-trained agent? "human goals". "human goals" sounds too broad and vague. What do you actually mean here? "AI autonomy". ??? "such as human-agent credit assignment issues, i.e., human rewards for achieving goals are assigned to non-assisting agents, which potentially leads the agent to learn poor behaviors and forfeits its autonomy." Why is this an issue? In RL you can have environments that emit rewards even if the agent does nothing. Section 2: Could you explain how the agents interact with the human? Do you consider one human? Do you consider a finite set of global states, actions, and observations? How are global states used? Do you assume that the policy only depends on the current observation? What do ''scenarios'' mean? pi_\theta is not defined, theta is not defined. What do R_t and R^H_t mean? What do Eπθ [G], and EπH [G] mean? It is important to distinguish between definition and equality. In this paper, "=" is used for both purposes. "The agent’s policy gradient can be formulated as". I know what "policy gradient" means in classic RL. But to be honest I am not sure what "policy gradient" means here. How do you derive the following equation? Section 3: What does Equation 2 mean? What does "the probability of exploring each policy" mean? Why do you separate human rewards from the agent's rewards, given that you are going to combine them together eventually? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper! We greatly appreciate your feedback on our work. We provide clarification below for your questions and concerns. If you have any further questions or comments, we will be happy to discuss them further. ---- **Q1**: Clarification on Contributions **A1**: We have reorganized our contributions to enhance their clarity. - We explored approaches to enable (surpass) human-level agents to assist humans in achieving their goals in complex collaborative environments. - Through this exploration, we gained insights into challenges like human-agent credit assignment. This challenge can cause agents to learn many ineffective enhancement behaviors that hinder their original goal of winning the game. - To address this challenge, we proposed the RLHG method and provide a detailed implementation framework. - We conducted human-agent tests on the popular MOBA game Honor of Kings. Both objective metrics and subjective preference results of the participants verified the effectiveness of our proposed method. **Q2**: Clarification on AI Autonomy and Human-Agent Credit Assignment **A2**: In our work, an autonomous agent refers to the pre-trained Wukong agent which is optimized for winning games using environmental (original) rewards. We aim to improve human performance in achieving goals while maintaining the agent's autonomy (Win Rate). However, during the experiment, we found that the HRE agent struggled with credit assignment when collaborating with humans. This led to ineffective enhancement behaviors, such as following humans unnecessarily (even in scenarios where humans can accomplish certain goals independently) or susceptibility to human behaviors. These behaviors significantly impeded the agent's autonomy, resulting in a significant decrease in Win Rate. To address this, one potential solution is to carefully reshape the human goal reward function. 
However, this approach heavily relies on domain knowledge and expertise. RLHG solves this by introducing a "baseline" (the primitive performance of humans in achieving goals without enhancement) and incorporating the positive gain (the benefit of the human performance under effective enhancement over the "baseline") into the optimization objective. **Q3**: Can this idea be used in environments without humans? **A3**: The RLHG framework can be naturally extended by replacing the human model with other specified surrogate models. However, our research primarily concentrates on human enhancement, which holds greater practical significance. **Q4**: Clarification on Win Rate and Human Metrics **A4**: Firstly, we would like to clarify that the pre-trained Wukong agent used in our experiment is beyond the human level. Secondly, the Wukong agent is trained for winning the game using environmental rewards. In contrast, the RLHG agent introduced an additional optimization objective of enhancing human performance. However, there is a natural trade-off between Win Rate and Human Metrics. Thirdly, our participant survey results show that many participants were more concerned with achieving their individual goals rather than the game result. Our human-agent test results also show that many participants preferred teaming with the RLHG agent, despite its slightly lower Win Rate compared to teaming with the Wukong agent. Finally, we also conducted an ablation study on the balance parameter $\alpha$ in Appendix C.1 and proposed an adaptive adjustment mechanism for better practical application in Appendix C.2. **Q5&A5**: For Section 1 - "rather than replacing them outright" -> It is related to Figure 1 (main text), in which the human wants the coin. At this point, the agent should act as an assistant to help the human pass through, so that the human can obtain the coin, rather than replacing the human to obtain the coin, which would prevent the human from achieving his/her goal. 
- "Do you literally mean ANY pre-trained agent?" -> Although the RLHG framework does not impose constraints on pre-trained agents, its practical utility may be limited for low-level agents, which may not possess the innate ability to assist human players. - "Human goals" -> In our work, it refers to the individual goals that humans want to achieve during the task. **Q6&A6**: For Section 2 - "how the agents interact with the human?" -> Both Figure 7 (main text) and the video (line 201) demonstrate the interaction between the agents and the human. - "Do you consider a finite set of global states, actions, and observations?" -> Please refer to the detailed description of the state and action spaces in Appendix B. - "Do you assume that the policy only depends on the current observation?" -> In Section 3.3, we stated that we used an LSTM module (step size is 16) to process the observed historical information. - "Notation definition" -> We have clarified the notations in the paper to improve its readability, including: $\pi_\theta$ represents the agent's policy network and $\theta$ is the network parameter. $R_t$ and $R^H_t$ denote the original reward and human goal reward at timestep $t$, respectively. $E_{\pi_\theta}[G]$ and $E_{\pi^H}[G]$ represent the expected return based on $\pi_\theta$ and $\pi^H$, respectively. - "Policy gradient" -> We have clarified the agent's policy gradient as: > $g(\theta)=E_{o_{0:\infty},a_{0:\infty}}\Big[\sum_{t=0}^{\infty}E_{a_t^H\sim\pi^H}\big[A_t + A^H_t\big]\nabla_{\theta}\log\pi_{\theta}(a_t\mid o_t,a_t^H)\Big]$ **Q7&A7**: For Section 3 - "Equation 2" -> It expresses that we consider $\pi_\theta$ to be $\pi_{\theta}^{ef}$, $\pi_{\theta}^{in}$, or $\pi_{\theta}^{ne}$, depending on whether the human value function $V_H^{\pi_{\theta}, \pi^H}(s)$ based on the policy $\pi_\theta$ is greater than, equal to, or less than the human primitive value $V_H^{\pi, \pi^H}(s)$. 
- "Why separate rewards" -> Separating them facilitates a clearer description of each, as our approach primarily focuses on improving the estimation of human enhancement advantages. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for submitting your response to the comments. Dear Reviewer 5Pqf, were your concerns addressed by the authors? Best, AC
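The policy classification rule described in A7 of the rebuttal above (Equation 2) can be sketched as a simple value comparison. This is a minimal illustration; the function name and the numerical tolerance are our own assumptions, not the paper's:

```python
def classify_enhancement_policy(v_h_joint, v_h_primitive, tol=1e-6):
    """Label pi_theta as effective / ineffective / negative enhancement by
    comparing the human value under the joint policy, V_H^{pi_theta, pi^H}(s),
    against the human primitive value, V_H^{pi, pi^H}(s)."""
    if v_h_joint > v_h_primitive + tol:
        return "effective"    # pi_theta^{ef}: the human does better with the agent
    if v_h_joint < v_h_primitive - tol:
        return "negative"     # pi_theta^{ne}: the human does worse with the agent
    return "ineffective"      # pi_theta^{in}: no measurable change
```

For example, `classify_enhancement_policy(1.2, 1.0)` returns `"effective"`, matching the "greater than" branch of the rule.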
Summary: The authors propose Reinforcement Learning from Human Gain, an RL algorithm that explicitly optimizes for enhancing human abilities in cooperative human-AI settings. Given a predefined set of human goals, the main approach first learns a value network to estimate the primitive human performance at achieving said goals. Then, a secondary Gain network is trained to estimate the enhancement the human return receives under interactions with the cooperative agent. The cooperative policy is trained with a combination of a traditional agent value network and the proposed human gain-based value network. The authors test the RLHG framework in a cooperative game and find that across experiments with real humans, the RLHG agent is preferred over an agent without the human gain objective, despite having a lower overall win rate. Strengths: - The problem setting is interesting and relevant, and the proposed solution of optimizing for human gain is intuitive — it makes sense that a cooperative agent should account for improvements in human behaviours, rather than having the agent directly optimize for its own reward or what it perceives human rewards to be (which may reduce human enjoyment and overall autonomy in a task, as noted by the authors). - The experiments use a complex multiagent task and test both human models and real human participants. The results show a significant improvement in human preference for the RLHG agent across the predefined goals as well as various subjective metrics. Weaknesses: - The method seems sensitive to the choice of partner $\pi$ while collecting the primitive human episodes. If the initial partner is already very good, will its success be attributed to the human? And vice versa -- if the partner is very bad, would that lead to a false representation of the base human skill? The paper would be strengthened with additional studies on how sensitive the method is to different pretrained policies. 
- The method relies on knowing the set of human goals beforehand. It would be interesting to have additional analyses of how the method is affected in the case where some of the goals are missing or misspecified, which is more representative of a real-world scenario. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What about scenarios where we don’t know human goals beforehand? This method relies on being able to accurately measure human rewards/goals which may not always be feasible (ill-defined goal spaces, human preferences changing over time). The authors comment on combining RLHG with goal inference methods, but those methods may still be faulty or difficult to learn. Could this method be applied to more general metrics of 'gain'-- e.g. increasing empowerment in the general assistance case? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately discuss the limitations of this work. Additional discussion on societal impacts would be helpful, as these agents are trained to interact with people directly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper! We greatly appreciate your positive feedback on our work. We provide clarification below for your questions and concerns. If you have any further questions or comments, we will be happy to discuss them further. ---- **Q1**: how sensitive the method is to different pretrained policies. **A1**: Our research is motivated by the desire to enable (surpass) human-level agents to assist humans in achieving their goals in complex collaborative environments. Although the RLHG framework does not impose constraints on pre-trained agents, its practical utility may be limited for low-level agents, as these agents may not possess the innate ability to assist human players. For high-level agents (such as the Wukong agent), the human-agent test results show that both objective metrics and subjective preferences of participants teaming with RLHG agents are better than those of teaming with Wukong agents, which verifies the effectiveness of the RLHG method. **Q2**: Could this method be applied to scenarios where we don't know human goals? **A2**: The experimental results presented in Section 4 indicate that the RLHG method can effectively enhance human performance in achieving goals in scenarios where human goals are known. In more general scenarios where human goals are not directly accessible (e.g., when they are implicitly included in human states and trajectories), combining the RLHG method with empowerment-based approaches like AvE [1] is promising. Specifically, the diversity of final states (human performance related) can serve as a proxy for measuring human empowerment capacity. This proxy can be viewed as an intrinsic "human reward" and can be directly integrated into the RLHG framework. To implement this approach, the RLHG method initially trains a value network to estimate primitive human empowerment using human-agent team play trajectories. 
Subsequently, the RLHG method trains a gain network that aims to effectively improve human empowerment. **Q3**: Lack of discussion on societal impacts. **A3**: We discussed the societal impacts of our work, including its implications for the AI research community and real-world applications, in Section E of the Appendix. We have now moved this discussion to the main text to underscore its significance. ---- [1] Du, et al. AvE: Assistance via empowerment. NeurIPS'2020. --- Rebuttal Comment 1.1: Comment: Thank you for the response and clarifications. I think it would be helpful to add some of the answers above to the draft (impact of quality of pretrained agent, etc.). I will be keeping my current score.
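The final-state-diversity proxy mentioned in A2 of the rebuttal above could be computed, for instance, as a mean pairwise distance over observed final states. The metric choice and function name below are our own illustrative assumptions, not the paper's (or AvE's) exact formulation:

```python
import numpy as np

def final_state_diversity(final_states):
    """Mean pairwise Euclidean distance between final states, used here as an
    illustrative proxy for human empowerment (assumed metric, not the paper's)."""
    x = np.asarray(final_states, dtype=float)
    n = len(x)
    if n < 2:
        return 0.0  # no diversity measurable from fewer than two states
    # Pairwise distance matrix via broadcasting: (n, 1, d) - (1, n, d) -> (n, n)
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    # Average over the n*(n-1) ordered pairs (diagonal is zero)
    return float(dists.sum() / (n * (n - 1)))
```

A quantity like this could then be treated as an intrinsic "human reward" and fed into the same value/gain networks the RLHG framework already uses.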
Summary: This paper focuses on the fine-tuning of a pre-trained agent to assist and enhance the performance of a given human model in achieving specific goals. The authors assume access to a human model and a pre-trained agent. The authors propose a two-step approach. 1. The human model's initial performance is determined by training a value network to estimate its effectiveness in goal attainment using episodes generated through joint execution with the agent. 2. The agent is trained to learn effective behaviors for enhancing the human model's performance using a gain network that estimates the improvement in human return when compared to the initial performance. The algorithm is evaluated on a Multi-player Online Battle Arena (MOBA) game. Strengths: The idea of developing algorithms to assist humans in solving tasks is interesting and of great practical interest, and this paper makes headway in this direction. The authors conduct extensive experiments, which include evaluations with real players to test their algorithm in a game. Weaknesses: 1. The algorithmic contribution in this paper appears to be relatively modest, as it mainly builds upon the existing Proximal Policy Optimization (PPO) approach. The authors introduce a gain function, which essentially computes an advantage by comparing it to another state-dependent baseline ( $V_\phi(s)$) 2. The assumption of having a human model is justifiable; however, the strong reliance on assuming knowledge of human goals, in my opinion, limits the direct applicability of this research (as acknowledged in the limitations section). 3. There is a lot of notational ambiguity in the paper (Section 2.2), which makes reading a little hard. For example, 3a. Advantage function generally depends on state/observation and action. In this setting, the Advantage is independent of both. 3b. Is V value of a state or the infinite horizon discounted reward? It is unclear as its used in both contexts. 3c. 
G is used as return-to-go, which should be a state dependent function. 4. I do not understand the exact need for the gain network. What would happen if in line 12 of the algorithm, you drop the - Gain(s) part? This essentially means that you are computing advantage with respect to the human primitive baseline. 5. The authors have invested significant effort and computational resources in conducting their experiments, making it extremely challenging to reproduce or recreate such experiments due to the demanding compute requirements. Although this does not diminish the value of their work, it would be beneficial if the authors could incorporate simpler environments into their experimental setup. This addition would aid in evaluating the algorithm's performance and further validate its quality. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss limitations and future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper! We greatly appreciate your positive feedback on our work. We provide clarifications below for your questions and concerns. If you have any further questions or comments, we will be happy to discuss them further. ---- **Q1**: Clarification on Contributions. **A1**: The RLHG algorithm is a key component of our contribution, but our work extends beyond this. Specifically, we have made the following contributions: - We explored approaches to enable (surpass) human-level agents to assist humans in achieving their goals in complex collaborative environments. - Through this exploration, we gained insights into problems like human-agent credit assignment issues. These issues can cause agents to learn many ineffective enhancement behaviors that hinder their original goal of winning the game. - To address this challenge, we propose the RLHG method and provide a detailed implementation framework. - We conducted human-agent tests on the popular MOBA game Honor of Kings. Both objective metrics and subjective preferences of the participants verified the effectiveness of our proposed method. **Q2**: Clarification on Known Human Goals Setting. **A2**: The setting of our research work is in scenarios where human goals are known, in complex collaborative environments. In more general scenarios where human goals are not directly accessible, combining the RLHG method with goal inference approaches like Bayesian Delegation [1] and empowerment-based approaches like Assistance via Empowerment (AvE) [2] is promising. For example, consider combining the RLHG method with the AvE method. The diversity of final states (human performance related) can serve as a proxy for measuring human empowerment capacity. This proxy can be viewed as an intrinsic "human reward" and can be directly integrated into the RLHG framework. **Q3**: Clarification on Notational Ambiguity. 
**A3**: We have clarified and disambiguated the notations used in the paper to improve its readability and comprehension, including but not limited to: - Advantage function $A$ -> $A(s_t,a_t)=Q(s_t,a_t)-V(s_t)$ - Value function $V$ -> $V(s_t)=E[G_t|s_t]$ - Return $G$ -> $G_t=\sum_{k=0}^\infty \gamma^k R_{t+k+1}$ **Q4**: Clarification on Gain Network. **A4**: The gain network $Gain_\omega(s)$ is a value network that estimates the expected benefit of human returns $Gain^{\pi_\theta^{ef}, \pi^H}$ resulting from effective enhancement versus no enhancement (human primitive value) for a given state $s$. We refer to this benefit as **Human Positive Gain**. The role of the gain network $Gain_\omega(s)$ in line 12 of the algorithm can be interpreted from two perspectives: (1) Advantage of return over effective enhancement value. In state $s$, it measures how much better taking a specific action $a$ is compared to the expected value $V_\phi(s)+Gain_\omega(s)$ of taking any effective enhancement action. (2) Advantage of gain over expected positive gain. In state $s$, it measures how much better the gain of taking a specific action $a$ is compared to the gain of taking any action that can produce a positive gain. By subtracting $Gain_\omega(s)$, the agent is guided to take actions with higher potential for human returns, rather than merely actions that exceed the human primitive value. **Q5**: Clarification on Experimental Environment. **A5**: Thank you for your suggestions. Our research is motivated by the desire to enable high-level agents to assist humans in achieving their goals in complex collaborative environments. In future work, we plan to apply the RLHG framework to games such as Google Football and StarCraft, to facilitate easier reproduction of our method by other researchers as well. ---- [1] Sarah, et al. Too many cooks: Bayesian inference for coordinating multi-agent collaboration. TOPICS'2021. [2] Du, et al. AvE: Assistance via empowerment. NeurIPS'2020. 
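The role of $Gain_\omega(s)$ in line 12 of the algorithm, as explained in A4 above, amounts to computing an advantage against the shifted baseline $V_\phi(s)+Gain_\omega(s)$. A minimal sketch, with variable names of our own choosing:

```python
def gain_adjusted_advantage(human_return, v_phi, gain):
    """Advantage of the observed human return over the expected value under
    effective enhancement: G^H - (V_phi(s) + Gain_omega(s)).
    Dropping `gain` (the reviewer's Question 4) would reduce this to an
    advantage over the human primitive baseline V_phi(s) alone."""
    return human_return - (v_phi + gain)
```

With `human_return=2.0`, `v_phi=1.0`, `gain=0.5` the advantage is `0.5`: it is positive only when the return exceeds the effective-enhancement value, not merely the primitive value.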
--- Rebuttal Comment 1.1: Comment: Thank you for your response and the change in notation. I think it will improve the readability of the paper. I retain my positive endorsement of the work.
Rebuttal 1: Rebuttal: Thanks to all reviewers for carefully reviewing our paper! We are grateful for your valuable feedback and suggestions, which we have addressed and incorporated into the revised manuscript. If you have any further questions or comments, we would be more than happy to discuss them. Moreover, we have included the experimental findings of Appendix C.1 (Ablation study on the balance parameter $\alpha$ for Win Rate and Human Performance) and Appendix C.2 (Experiments on the adaptive adjustment mechanism) in the supplementary PDF for your reference, if necessary. Pdf: /pdf/6dd85ee2a0146c7462862168647252148bbfddf0.pdf
NeurIPS_2023_submissions_huggingface
2023
Cross-modal Prompts: Adapting Large Pre-trained Models for Audio-Visual Downstream Tasks
Accept (poster)
Summary: The authors propose a novel dual-guided spatial-channel-temporal attention mechanism for audio-visual problems, which leverages pre-trained audio and visual encoders. And they show improvement on various audio-visual tasks such as event localization, parsing, segmentation, and question answering. Strengths: - This work applied the proposed methods and evaluated them on diverse audio-visual downstream tasks, including localization, parsing, segmentation and question answering. This helps to showcase the generalization of the proposed approach. Weaknesses: - The use of notations in Section 3.3 is very complicated and difficult to follow. Consider utilizing Figure 2 (4) for illustration and walking through each step along with the modules in the figure to make it easier for the readers to follow. - For the results presented in Tables 2 and 3, and discussed in Section 4.3, only the better performance is highlighted without providing potential explanation and/or hypothesis why the proposed system performed worse in some scenarios. For example, the performance on event-level as in Table 2, and the performance with MS3 of segmentation in Table 3. Consider adding some discussion for these cases. - Some abbreviations are referred to without any information provided, such as LLP in line 210, and CMBS in line 218. Consider adding a one-liner explanation to help the reader. - Minor comments: Section 3.2 in equation (1), the first superscript "l" does not follow the same style as others. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Do you need to use the same number of layers in the Swin-T and HTS-AT? Since they share the same notation in all the equations for both audio and visual paths. - In Section 3.1, line 127, where does T come from? Could you provide more detail on the choices of mel-spectrogram? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - No potential social or ethical implications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer TYiB, Thank you so much for giving us positive feedback and the very insightful questions. Please see our point-by-point responses below. ------ **Weakness 1 - The use of notations in Section 3.3 is very complicated and difficult to follow.** Thank you for your suggestion. We will revise it as follows and consider adding this passage to the revised paper: 3.3 ...., our proposed DG-SCT module can achieve triple levels of information highlighting in two directions **(See Fig. 2 (4))**. .... **Channel-wise attention:** .... Similarly, we generate V2A channel attentive maps $M_t^{ac}$ **(See Fig. 2 (4))**. .... **Spatial-wise attention:** .... Similarly, we generate V2A frequency attentive maps $M_t^{af}$ **(See Fig. 2 (4))**. .... **Temporal-gated attention:** .... Similarly, for V2A, we feed the spatial-channel attentive visual features to generate $G^a$ **(See Fig. 2 (4))**. .... **Summary:** .... We combine the abovementioned three attention mechanisms together to form our DG-SCT module. **Walking through each step along with Fig. 2 (4).**..... ------ **Weakness 2 - Only the better performance is highlighted without providing potential explanation and/or hypothesis why the proposed system performed worse in some scenarios.** Thank you for your concern. We will add more explanation for the results presented in Tables 2 and 3, and discussed in Section 4.3: In the experimental results presented in Table 2 and Table 3, we observed that for the scenes where our performance was not higher, it is comparable to the state-of-the-art results reported in previous studies. However, a notable decline is observed in the audio scene depicted in **Table 2**. Based on our analysis, it is plausible that the lower robustness of HTS-AT, compared to previous audio encoders such as VGG-like networks, may be a contributing factor to this decline. 
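The audio-to-visual channel-wise attention outlined in the Weakness 1 response above can be sketched as a squeeze-and-excitation-style gating of visual channels conditioned on audio features. This is an illustrative sketch under assumed shapes and with our own function names, not the paper's exact DG-SCT module:

```python
import numpy as np

def a2v_channel_attention(audio_feat, w, visual_feat):
    """Audio-to-visual channel attention sketch: a linear map of the audio
    feature followed by a sigmoid yields per-channel gates in (0, 1) that
    reweight the visual feature map.
    Assumed shapes: audio_feat (D,), w (D, C), visual_feat (C, H, W)."""
    gates = 1.0 / (1.0 + np.exp(-(audio_feat @ w)))  # (C,) channel gates
    # Broadcast the gates over the spatial dimensions of the visual map
    return visual_feat * gates[:, None, None]
```

A V2A frequency-wise gating would follow the same pattern with the roles of the two modalities swapped, mirroring the symmetric "two directions" of the module.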
------

**Weakness 3 - Some abbreviations are referred to without any information provided.**

Thank you for your suggestion. LLP refers to the Look, Listen, and Parse dataset for audio-visual video scene parsing. CMBS refers to Cross-Modal Background Suppression for audio-visual event localization. We have already provided some details about the datasets and tasks in **Appendix B**, specifically on **line 15**. However, we acknowledge that we overlooked explaining the abbreviations, and we will add a concise introduction to them in the Appendices.

------

**Weakness 4 - Section 3.2 in equation (1), the first superscript "l" does not follow the same style as others.**

We sincerely appreciate your dedicated effort in carefully reviewing our work! This issue has been addressed.

------

**Question 1 - Do you need to use the same number of layers in the Swin-T and HTS-AT?**

Thank you for your insightful question; we are glad that you asked. No, we do not need to use the same number of layers or the same feature dimensions. To address the dimension mismatch, **"we first use a two-dimensional convolution kernel and a linear projection to make the dimensions of the audio and visual prompts consistent with their counterpart modality", line 139**. For the full-training settings of the AVE/AVS/AVVP/AVQA tasks, although both Swin-V2-L and HTS-AT have **4** layers, the numbers of blocks per layer differ: **2, 2, 18, 2** and **2, 2, 6, 2**, respectively. For layers with inconsistent block counts **(e.g., the third layer, with 18 and 6 blocks, respectively)**, our design schedules as many rounds of cross-modal interaction as possible **(e.g., 6)**, spaced uniformly. Hence, neither the number of layers nor the feature dimensions impose any limitation, and our approach is applicable to any transformer architecture.
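The uniform interaction schedule described in the Question 1 response can be sketched as follows. The exact block indices are an assumption (the response only states that the interactions are spaced uniformly); `interaction_blocks` is a hypothetical helper, not the authors' code.

```python
def interaction_blocks(n_blocks: int, n_interactions: int) -> list[int]:
    """One plausible uniform schedule: interact after every
    (n_blocks // n_interactions)-th block of the deeper stage."""
    step = n_blocks // n_interactions
    return [i * step + (step - 1) for i in range(n_interactions)]

# Third stage: Swin-V2-L has 18 blocks, HTS-AT has 6, so 6 interactions.
print(interaction_blocks(18, 6))  # [2, 5, 8, 11, 14, 17]
print(interaction_blocks(6, 6))   # [0, 1, 2, 3, 4, 5]
```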
------

**Question 2 - In Section 3.1, line 127, where does T come from? Could you provide more detail on the choices of mel-spectrogram?**

Thank you for asking. We split the mel-spectrogram into **T** non-overlapping segments, each **1** second long. A spectrogram, in simple terms, is obtained by dividing an audio waveform into small windows; each window undergoes a short-time discrete Fourier transform to obtain frequency and amplitude information. The mel scale, mathematically speaking, is a non-linear transformation of the frequency scale, constructed so that sounds equidistant on the mel scale also sound equidistant to human listeners. In other words, it is a logarithmic scale that captures the perceptual sensitivity of human hearing to different frequencies. The mel-spectrogram, which leverages the mel scale for effective characterization of audio signals, is commonly used in audio representation. We hope the above clarifies any doubts or inquiries you may have had.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for your responses and clarifications. I would suggest integrating these clarifications into the final paper if possible. I am maintaining my rating.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer TYiB, Thank you so much for checking our response. We are glad that you keep the original positive rating.
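As a concrete illustration of the mel-spectrogram computation and 1-second segmentation discussed in the Question 2 response above, here is a minimal NumPy sketch. It is not the authors' HTS-AT pipeline; the sample rate, FFT size, hop length, and mel-band count are illustrative assumptions.

```python
import numpy as np

def hz_to_mel(f):
    # O'Shaughnessy mel formula
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # triangular filters spaced evenly on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mel_spectrogram(wave, sr=16000, n_fft=400, hop=160, n_mels=64):
    # short-time Fourier transform over Hann-windowed frames
    win = np.hanning(n_fft)
    frames = [wave[i:i + n_fft] * win
              for i in range(0, len(wave) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
    return spec @ mel_filterbank(sr, n_fft, n_mels).T  # (frames, n_mels)

# split a 10 s clip into T = 10 non-overlapping 1 s segments
sr = 16000
wave = np.random.default_rng(0).normal(size=10 * sr)
mel = mel_spectrogram(wave, sr)
frames_per_sec = sr // 160          # 100 frames per second at hop = 160
segments = [mel[t * frames_per_sec:(t + 1) * frames_per_sec]
            for t in range(10)]
```

Each 1-second segment is then a (frames, n_mels) slice that an audio encoder can consume independently.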
Summary: This paper proposes a parameter-efficient approach, DG-SCT. DG-SCT can adapt pre-trained audio and visual models to downstream audio-visual tasks without updating the pre-trained encoders (i.e., the pre-trained encoders are kept frozen).

Strengths:

$+$ DG-SCT achieves state-of-the-art results on several downstream audio-visual tasks.

$+$ DG-SCT is able to learn channel-wise, spatial, and temporal information to combine with pre-trained audio and visual encoders.

Weaknesses:

$-$ Although DG-SCT leverages different types of cross-modal attention (i.e., channel-wise, spatial, and temporal), the design is not clear. For example, DG-SCT leverages an RNN for modeling cross-modal temporal information and learnable weights for cross-modal spatial information. Such a technique could be implemented with divided (space-time) attention after channel-wise attention.

$-$ The effectiveness of the proposed temporal modeling is not clear. The baselines for the implementation of DG-SCT in AVE/AVVP/AVQA already have cross-modal temporal modeling modules.

$-$ The number of trainable parameters is not reported. For example, the baselines in AVVP and AVE usually use pre-extracted audio and visual features, which contributes to a lower number of trainable parameters.

$-$ Lack of efficiency comparison (e.g., FLOPs). Combining several attention mechanisms will lead to large computational costs. It would be great to include these metrics.

$-$ Some experimental settings and results are not clear (see Questions).

Technical Quality: 2 fair Clarity: 2 fair

Questions for Authors:

* In the AVE task, CMBS uses much weaker audio and visual encoders. How does CMBS with Swin-V2-L and HTS-AT work? Why does CMBS drop from 79.3 to 78.6 in Table 5?
* In the AVVP task, the results of MGN in Table 2 are lower than the numbers in the paper. Does DG-SCT use [A], which is also used in MGN?
* Similar to the question about the AVE task, how does AVS work with Swin-V2-L and HTS-AT in the AVS task?
(the performance also drops in Table 5)
* Are the baselines in zero/few-shot tasks also pre-trained on VGGSound (40K)? If so, the comparison with DG-SCT would be fair.

[A] Wu, Yu, and Yi Yang. "Exploring heterogeneous clues for weakly-supervised audio-visual video parsing." CVPR 2021

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes, DG-SCT uses a larger number of trainable parameters Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer myp2, Thank you so much for taking the time to read our paper and for providing very insightful and constructive comments and questions. Please see our point-by-point replies below. We also conducted quite a few new experiments to address your concerns.

------

**Weakness 1 - Such a technique can be implemented with divided (space-time) attention after channel-wise attention.**

We understand the reviewer's point. The reason we leverage an RNN for modeling cross-modal temporal information is that, as stated in **line 180**, **"Given an audio, significant time segments (e.g., "engine sound") should be emphasized, while background information (e.g., "silence") should be attenuated. The same holds for the visual information as well."** We made an effort to incorporate divided (space-time) attention following channel-wise attention, but regrettably, the results were not encouraging. This can be attributed to the absence of guidance from the other modality, which plays a pivotal role in audio-visual scenarios.

------

**Weakness 2 - The effectiveness of the proposed temporal modeling is not clear.**

Thank you for asking; we understand your concern. In **4.5 Ablation analysis, line 263**, although we acknowledged that **"removing the 'T' module individually does not result in a significant decrease in the performance of the model"** because **"if features of a certain segment are more prominent in both spatial and channel dimensions, it is more likely to be important in the temporal dimension as well"**, adding the "T" module (temporal modeling) did make a difference. For example, after adding our "T" module, the results rose from **51.8** and **62.3** to **53.5** and **64.2**, respectively, in the MS3 setting of the AVS task, even though the downstream model already has temporal modeling for late interaction.

------

**Weakness 3 - The number of trainable parameters is not reported.**

Thank you for your suggestion.
We apologize for omitting the number of trainable parameters in our initial submission. We will include it in the Appendix. Please find the updated information below (AVE task):

| Method | Trainable Params (M) | Total Params (M) | Acc |
| ------- | -------------------- | ---------------- | -------- |
| CMBS | 14.4 | 216.7 | 79.3 |
| LAVisH | 10.1 | 238.8 | 79.7 |
| LAVisH* | 30.6 | 374.9 | 78.6 |
| Ours | 43.6 | 461.3 | **82.2** |

In "LAVisH*", we modified LAVisH to use the same encoders as ours for fair comparison. In contrast to retraining the complete backbones, we achieve a substantial reduction in the number of trainable parameters. Furthermore, our approach demonstrates a remarkable improvement in performance over the other baselines with an acceptable increase in parameter count.

------

**Weakness 4 - Lack of efficiency comparison (e.g., FLOPs).**

We thank the reviewer for pointing this out. We acknowledge that our previous consideration may have been inadequate, since the audio-visual tasks are not inherently time-sensitive and our baselines did not report FLOPs. We will add the following efficiency comparison on the AVE task to the Appendices:

| Method | GFLOPs | Acc |
| ------- | ------ | -------- |
| CMBS | 40.9 | 79.3 |
| LAVisH | 406.7 | 79.7 |
| LAVisH* | 416.1 | 78.6 |
| Ours | 460.8 | **82.2** |

In "LAVisH*", we modified LAVisH to use the same encoders as ours for fair comparison. Our approach fine-tunes the pre-trained model, which unavoidably results in a decrease in speed compared with the previous late-interaction baselines (CMBS). However, we achieve excellent results, and our approach is applicable to multiple audio-visual tasks. Overall, the benefits are substantial.

------

**Question 1 and 3 - Settings and results in AVE task and AVS task.**

We thank the reviewer for the valuable input.
The original LAVisH baseline uses Swin-V2-L to encode both the audio and visual modalities. For fair comparison, we modified LAVisH to use Swin-V2-L and HTS-AT, the same as ours, to encode the visual and audio modalities, respectively. The drop to **78.6** may be attributed to the rudimentary approach of LAVisH, which utilizes latent tokens for cross-attention: the coarsely extracted cross-modal information may not adequately counteract the negative effects of the domain gaps introduced by using different encoders.

------

**Question 2 - Settings and results in AVVP task.**

Thank you for your question. No, we did not use [A] in MGN, because MGN does not open-source the code that uses [A]. The reported results correspond to the MGN paper without [A].

------

**Question 4 - Are the baselines in zero/few-shot tasks also pre-trained on VGGSound (40k)?**

Thank you for asking; this is a very insightful question. Yes, the baselines are also pre-trained on VGGSound (40K).

---

Rebuttal Comment 1.1: Comment: Thanks for providing the response. It partially addresses my concerns. However, weakness 3 and Question 1 remain. 1. DG-SCT uses more trainable parameters than LAVisH (a parameter-efficient method) and CMBS (a conventional method). 2. More issues arise from the table in weakness 3. For example, CMBS uses fewer trainable parameters and weaker pre-trained audio and visual encoders. 3. LAVisH* may not be implemented correctly. If the authors just replace the pre-trained audio and visual encoders of LAVisH, the trainable parameters should be similar to 10.1M. Overall, I think the comparison is not completely fair. Thus, I keep my rating the same.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer myp2, Thank you so much for checking our response. Let us respond to your questions point by point.

**Question 1 and 3 - DG-SCT uses more trainable parameters; LAVisH\* may not be implemented correctly.**

Thank you for your question.
Yes, our approach utilizes more trainable parameters. However, our proposed dual-guided spatial-channel-temporal attention mechanism requires a number of parameters comparable to the latent tokens utilized in LAVisH. The increase in trainable parameters mainly stems from the inclusion of a two-dimensional convolution kernel and a linear projection, which ensure the consistency of dimensions between the audio and visual prompts.

| Method | Trainable Params (M) | Total Params (M) | Acc |
| ------- | -------------------- | ---------------- | -------- |
| CMBS | 14.4 | 216.7 | 79.3 |
| LAVisH | 10.1 | 238.8 | 79.7 |
| LAVisH* | 17.3+**13.3**=30.6 | 374.9 | 78.6 |
| Ours | 17.3+**26.3**=43.6 | 461.3 | **82.2** |

In "LAVisH*", we modified LAVisH to use the same encoders as ours for fair comparison. In the "Trainable Params (M)" column, for the rows "LAVisH*" and "Ours", the first number (17.3) represents the trainable parameters of the two-dimensional convolution kernel and the linear projection. In other words, the increase in parameter count of our approach primarily stems from the inconsistency in dimensions between the audio and visual encoders. Unlike LAVisH, which utilizes the same encoder for both audio and video, the dimension inconsistency in our method leads to a higher number of trainable parameters. This also addresses question 3 -- yes, LAVisH* is implemented correctly. Apart from addressing the dimension inconsistency between the audio and visual encoders, the trainable parameters of LAVisH* (13.3M) and LAVisH (10.1M) are comparable.

Our model demonstrates strong versatility. It performs effectively across multiple tasks and shows significant improvements compared with LAVisH (a parameter-efficient method), which performs relatively poorly on the AVVP task and the AVS MS3 task, and CMBS (a conventional method), which can only be used in the AVE task.
Here are the results showcasing the AVVP and AVS MS3 tasks:

| Method | Segment-level | | | | | Event-level | | | | |
| :----: | :-----------: | -------- | -------- | -------- | -------- | ----------- | -------- | -------- | -------- | -------- |
| | A | V | AV | Type | Event | A | V | AV | Type | Event |
| HAN | 60.1 | 52.9 | 48.9 | 54.0 | 55.4 | **51.3** | 48.9 | 43.0 | 47.7 | 48.0 |
| MGN | **60.7** | 55.5 | 50.6 | 55.6 | **57.2** | 51.0 | 52.4 | 44.4 | 49.3 | **49.2** |
| LAVisH | 57.2 | 54.3 | 50.4 | 54.9 | 56.3 | 47.4 | 51.2 | 45.1 | 49.0 | 48.5 |
| Ours | 59.0 | **59.4** | **52.8** | **57.1** | 57.0 | 49.2 | **56.1** | **46.1** | **50.5** | 49.1 |

| Method | Mj | Mf |
| ------ | -------- | -------- |
| AVS | **54.0** | **64.5** |
| LAVisH | 49.8 | 60.3 |
| Ours | 53.5 | 64.2 |

It can be observed that LAVisH fails to achieve satisfactory performance on challenging tasks such as AVVP and AVS MS3; in many settings, it even lags behind the previous baselines. This is attributed to the coarse extraction of cross-modal information through the latent tokens. Our method, however, achieves state-of-the-art or comparable results.

**Question 2 - More issues come from the table in weaknesses 3. For example, CMBS uses fewer trainable parameters and weaker pre-trained audio and visual encoders.**

We thank the reviewer for pointing this out. As mentioned above, the increase in parameter count of our approach primarily stems from the inconsistency in dimensions between the audio and visual encoders. As for the "weaker pre-trained audio and visual encoders" concern, the LAVisH paper has already replaced the encoders of CMBS for fair comparison: with the visual encoder replaced by Swin-V2-L, CMBS achieves an accuracy of **80.4%** on the AVE task. Therefore, it is justifiable to claim that our method (**82.2%**) represents the state of the art. Furthermore, our method pioneers a better way of applying pre-trained models in the multimodal domain.
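For reference, the "Trainable Params" vs. "Total Params" split reported in the tables above is typically computed by freezing the backbone and counting parameters that still receive gradients. Here is a generic PyTorch sketch with stand-in modules (a small `nn.Sequential` "backbone" and a linear "adapter" — illustrative assumptions, not the actual DG-SCT code):

```python
import torch.nn as nn

# Stand-ins: a "frozen backbone" and a small trainable adapter.
backbone = nn.Sequential(nn.Linear(100, 100), nn.ReLU(), nn.Linear(100, 100))
for p in backbone.parameters():
    p.requires_grad = False          # keep the pre-trained encoder frozen

adapter = nn.Linear(100, 10)         # the only part that receives gradients

params = list(backbone.parameters()) + list(adapter.parameters())
total = sum(p.numel() for p in params)
trainable = sum(p.numel() for p in params if p.requires_grad)

print(trainable, total)  # 1010 21210
```

Only the adapter's parameters count as trainable, which is exactly the accounting behind a row like "17.3+26.3=43.6" trainable out of 461.3 total.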
Summary: This work proposes a new mechanism that utilizes audio-visual features as novel prompts to extract task-specific features from large-scale models. It introduces an attention mechanism named Dual-Guided Spatial-Channel-Temporal (DG-SCT), which utilizes the audio and visual modalities to guide the feature extraction of their respective counterpart modalities across the spatial, channel, and temporal dimensions. The proposed method is evaluated on a series of tasks including audio-visual event localization, audio-visual video parsing, audio-visual segmentation, and audio-visual question answering. Moreover, it proposes a new benchmark for audio-visual few-shot/zero-shot tasks on the AVE and LLP datasets.

Strengths:

- This is a very interesting work. The use of prompting has primarily focused on language and later vision; notably, this work performs audio-visual prompting, which seems quite innovative.
- Moreover, unlike previous works that offer unidirectional prompts, the proposed approach introduces bidirectional prompts, where the visual and audio modalities mutually guide each other in the feature extraction process.
- The proposed approach is evaluated on several benchmarks and compared fairly with prior works, showing the effectiveness of the proposed method.
- It is a nicely written paper and easy to follow.

Weaknesses: Please see #Questions for more open-ended discussions.

Technical Quality: 3 good Clarity: 3 good

Questions for Authors:

1. The proposed mechanism resembles adapters more than prompts. Prompts usually work in the input space, while the proposed DG-SCT mechanism is also added in the intermediate layers. Moreover, prompting generally assumes no access to the model's internal architecture, but the proposed mechanism needs such access to modify/adjust some of the intermediate layers to connect the DG-SCT module.
2.
Often audio and video representations suffer from substantial domain gaps due to their modality-specific nature. Did you encounter such scenarios?
3. The proposed method is based on the Swin transformer. Could you please elaborate on whether your method can be adapted to other transformer architectures (e.g., other ViT variants)? If so, what changes are necessary, or is it a Swin-specific solution?
4. In this study, the pre-trained encoders are trained in a uni-modal setup. Do you think the proposed solution is equally effective if multimodal pre-trained networks are used? Could you please run an experiment on such a setup using any state-of-the-art AV transformer model?
5. Did you try validating your method on modality-agnostic backbones, or can you show results in such a setup? Modality-agnostic backbones refer to the case where the same network is trained with both modalities, e.g., VATT (https://arxiv.org/abs/2104.11178), XKD (https://arxiv.org/pdf/2211.13929.pdf).

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see #Questions. My questions are mostly open-ended. I will look forward to the discussion with the authors and the arguments of the other reviewers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer VeZU, Thank you for the positive feedback and all these very valuable and constructive questions/suggestions! Let us respond to your questions point by point.

------

**Question 1 - The proposed mechanism resembles adapters more than prompts.**

Thank you very much for your kind suggestion! We initially referred to our proposed mechanism as 'prompts' to represent the guidance provided to the learnable weights in the preceding layers, emphasizing the use of audio and video as prompts to guide the representation of the counterpart modalities. However, we acknowledge that this expression may have been imprecise. Following your valuable suggestion, we will rename it 'adapters' instead. We sincerely appreciate your input and will carefully incorporate this change.

------

**Question 2 - Did you encounter domain gaps?**

We thank the reviewer for pointing this out. The reviewer is correct that domain gaps exist in multi-modal tasks due to the unique characteristics of each modality. **For the full-training settings** of the AVE/AVS/AVVP/AVQA tasks, during the model design process we carefully considered employing methods such as contrastive learning to reduce the gaps between modalities, but observed limited effectiveness. The reason is that our proposed method focuses on fine-tuning a minimal number of parameters in the encoder of the current modality using prompt information derived from the counterpart modality. The majority of parameters remain frozen, thereby effectively mitigating the domain-gap problem while enriching the modality information. After late interaction and fusion in the downstream models, our method is able to achieve promising results. **For zero-shot/few-shot scenarios**, as mentioned in **line 228, our method "generates audio and visual features for text-audio and text-image contrastive learning"**.
Here we use two text branches instead of one **(see Appendices Figure 1 "Right")** to address the domain gaps between the visual and audio modalities.

------

**Question 3 - Could you please elaborate if your method can be adapted to other Transformer architectures?**

Thank you for asking this meaningful question! **Our method is not a Swin-specific solution.** As mentioned in **4.2 Implementation details, line 227**, in our few-shot/zero-shot scenarios, **"We incorporate DG-SCT modules as adapters between the frozen CLIP image encoder ViT and frozen CLAP audio encoder HTS-AT"**. Our method can be adapted to other transformer variants as well, even though different transformer architectures may have different dimensions and numbers of layers. First, as mentioned in **line 139**, **"we first use a two-dimensional convolution kernel and a linear projection to make the dimensions of the audio and visual prompts consistent with their counterpart modality"**. Second, for the full-training settings of the AVE/AVS/AVVP/AVQA tasks, although both Swin-V2-L and HTS-AT have **4** layers, the numbers of blocks per layer differ: **2, 2, 18, 2** and **2, 2, 6, 2**, respectively. For layers with inconsistent block counts **(e.g., the third layer, with 18 and 6 blocks, respectively)**, our design schedules as many rounds of cross-modal interaction as possible **(e.g., 6)**, spaced uniformly. Hence, neither the number of layers nor the feature dimensions impose any limitation, and our approach is applicable to any transformer architecture.

------

**Question 4 and 5 - Do you think the proposed solution is equally effective if multimodal pretrained networks are used?**

We did encounter some challenges here: the official VATT code is implemented in TensorFlow and is incompatible with our PyTorch codebase, and the code for XKD is not open-sourced. However, we have identified a recent study on modality-agnostic backbones called Meta-Transformer.
Please see the following table for the results on the AVE task:

| Method | Acc |
| ---------------------------- | -------- |
| LAVisH | 79.7 |
| LAVisH* | 78.6 |
| Ours | **82.2** |
| LAVisH with Meta-Transformer | 74.8 |
| Ours with Meta-Transformer | 77.5 |

In "LAVisH*", we modified LAVisH to use the same encoders as ours for fair comparison. However, the results were not ideal: merely replacing the baseline encoders with the modality-agnostic Meta-Transformer backbone resulted in a significant performance loss. This indicates that current modality-agnostic backbones may not perform well on audio-visual downstream tasks. Nevertheless, incorporating our method into the modality-agnostic backbone still yields performance improvements, highlighting the robustness of our approach. We thank the reviewer for highlighting this aspect. Indeed, the utilization of a modality-agnostic backbone appears to be a highly promising trend. We will continue to follow relevant research and anticipate its eventual suitability for audio-visual downstream tasks.
Summary: This paper mainly proposes a new attention mechanism named Dual-Guided Spatial-Channel-Temporal (DG-SCT), which utilizes the audio and visual modalities to guide the feature extraction of their respective counterpart modalities across the spatial, channel, and temporal dimensions. Experiments on 4 tasks show the advantage of the proposed method.

Strengths: Overall it is a nice paper.

- The presentation of this paper is very clear.
- Experiments are extensive and appear solid.
- From a high level, I think a better audio-visual attention mechanism of this type would benefit a series of downstream tasks. Prompting is a trend to make audio-visual systems smarter.

Weaknesses: With the above said, I am not sure if the claim on Page 1, lines 25-27, is valid for all audio-visual tasks: "However, when perceiving the roaring sound of an engine, the visual region depicting a "car" should receive more attention than the region of "trees". Simultaneously, when observing the car, it is crucial to concentrate on the audio segments of the engine sound." It is true for tasks about audio-visual correspondence like retrieval/localization/segmentation, etc. But in other tasks, like audio-visual joint classification, we do want to leverage the information that uniquely appears in a single modality to make predictions. This is because if we only use mutual information, then a single modality is enough; what we are looking for is information that does not appear in one modality but can be found in the other. The proposed method seems to attend to the mutual information. I am wondering if it would negatively impact the performance of joint classification tasks.

- minor: Page 2, line 54, it should be HTS-AT, not HT-SAT.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: For the evaluation tasks, it would be nice to have a joint classification task such as AudioSet classification. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is one sentence limitation "We consume a few more parameters than LAVisH." I don't think this weakens the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer BzrV, Thank you for taking the time to consider our paper and for giving us positive feedback!

------

Regarding your question, **"I am wondering if it would negatively impact the performance of joint classification tasks."** This is a very good question. In joint classification tasks, mutual information is critical as well. For example, when considering only the visual modality, a radio appears motionless; only by taking the audio into account can one determine whether the radio is producing sound. Similarly, when only listening to the audio, the sounds of turning a light on and off may be hard to distinguish; only by combining the visual information can we determine whether the light is on or off. By leveraging the information from both modalities, a more comprehensive and complete understanding can be achieved.

It seems you are referring to audio-visual asynchrony and the potential side effects of our proposed method. In **Appendix D, line 86**, an ablation study was conducted to alleviate this concern: we compare our final bidirectional approach with unidirectional variants that use only the **audio-to-visual (A2V)** or the **visual-to-audio (V2A)** spatial-channel-temporal attention mechanism.
As indicated in **Table 3 (also depicted below)**:

| Module | | Segment-level | | | | | Event-level | | | | |
| :----: | ---- | :-----------: | -------- | -------- | -------- | -------- | ----------- | -------- | -------- | -------- | -------- |
| A2V | V2A | A | V | AV | Type | Event | A | V | AV | Type | Event |
| - | - | 57.8 | 56.3 | 49.8 | 55.2 | 54.9 | 48.2 | 51.7 | 43.9 | 48.8 | 47.6 |
| √ | - | 56.4 | **59.5** | **53.3** | 56.4 | 55.0 | 47.4 | 55.9 | **46.3** | 49.9 | 47.8 |
| - | √ | **59.4** | 57.3 | 50.8 | 55.8 | 56.8 | 49.2 | 54.1 | 44.4 | 49.2 | 48.6 |
| √ | √ | 59.0 | 59.4 | 52.8 | **57.1** | **57.0** | **49.2** | **56.1** | 46.1 | **50.5** | **49.1** |

we observe that using the A2V module alone does not significantly decrease the accuracy for visual and audio-visual events. However, without visual guidance (V2A), the performance on audio events suffers a considerable decline; likewise, the performance on visual events drops without audio guidance (i.e., using the V2A module alone). **These experimental findings demonstrate the necessity of visual guidance for audio events and of audio guidance for visual events.** Our proposed DG-SCT model can bidirectionally guide the representation of each modality, thus enhancing the accuracy of downstream audio-visual tasks.

Based on our analysis, the benefits of incorporating bidirectional modality guidance for capturing rich information outweigh the interference caused by audio-visual asynchrony. However, we acknowledge the importance of addressing audio-visual asynchrony in future work; specifically, we plan to explore techniques during the representation phase to mitigate this problem and further enhance the performance of our model.

Moreover, we have added an experiment on **VGG-Sound (40K) classification** (the VGG-Sound dataset is a subset of AudioSet).
The results are as follows:

| Method | Acc |
| :----: | :------: |
| A+V | **69.7** |
| A | 63.8 |
| V | 60.6 |

The results demonstrate that incorporating both audio and visual information simultaneously leads to improved performance on the audio-visual joint classification task.

------

Minor issue: **"Page 2, line 54, it should be HTS-AT, not HT-SAT."**

We sincerely appreciate your dedicated effort in carefully reviewing our work! This issue has been addressed.

---

Rebuttal Comment 1.1: Title: Discussion Comment: Dear authors, Thanks so much for the explanation and for pointing me to the appendix.

>In joint classification tasks, mutual information is critical as well. For example, when considering only the visual modality, a radio appears motionless. However, by taking into account the audio, one can determine whether the radio is producing sound. Similarly, when only listening to the audio, the sounds of turning a light on and off may be hard to distinguish.

Don't these examples show that modality-unique information (rather than modality-mutual information) is important for joint classification? The radio sound is missing from the visual modality, and the turning on/off information is missing from the audio modality. Do the authors actually mean "mutual object"?

>It seems you are referring to the consideration of audio-visual asynchrony and the potential side effects of our proposed method. In Appendix D, line 86, an ablation study was conducted to alleviate this concern.

This ablation is very interesting. I like this experiment.

>Moreover, we have added an experiment on VGG-Sound (40K) classification (the VGG-Sound dataset is a subset of AudioSet). The results are as follows ...

Is it true that VGG-Sound is a subset of AudioSet? If so, can the authors provide a reference for this? The original VGG-Sound is 200k; how was the 40k set selected? Is it fair to use the 40k results to compare with other papers reporting on 200k?
Besides the question regarding the dataset, I actually think it is quite easy for a model to have better av performance than a- or v-only performance, but what I really want to know is: comparing a model that explicitly focuses on modality-mutual information (what is proposed) and a model without such focus (e.g., say MBT from Google, or a variant of the proposed method without such focus), which would be better? I understand the authors may not have time to do this experiment, but I would like to know the authors' opinion. Anyway, I don't believe joint classification is a fatal problem of this paper, as many a-v applications do require mutual information. --- Reply to Comment 1.1.1: Comment: Dear Reviewer BzrV, Thank you so much for your reply! Let us respond to your questions point by point. > Don't these examples show that modality-unique information (rather than modality-mutual information) is important for joint classification? As the radio sound is missing in the visual modality, and the turning on/off information is missing in the visual modality? Do the authors actually mean "mutual object"? We strongly agree with the reviewer’s viewpoint that modality-specific information is indeed crucial for audio-visual tasks. However, it is undeniable that modality-mutual information also plays an important role in the context of audio-visual understanding. It enables the alignment and integration of both modalities, allowing the model to better understand audio-visual tasks. We apologize for any confusion caused by the lack of clarity in the examples we provided, which led to the reviewer’s misunderstanding. **While it is true that one can determine whether a radio is emitting sound solely based on the audio modality, what about more challenging tasks? 
For instance, consider the task of locating a sound-emitting object within a video.** In such cases, audio can guide the visual modality, focusing on the regions representing the radio in the video and providing information on when the radio emits sound and when it does not. This operation corresponds to the A2V module in our approach. Similarly, the V2A module enriches the audio modality with information guided by the visual modality. > Is it true that VGG-Sound is a subset of AudioSet? If so, can the authors provide a reference on this? The original VGG-Sound is 200k, how the 40k set is selected? Is this fair to use the 40k results to compare to other papers reporting on 200k? VGG-Sound originates from the paper **“VGGSOUND: A LARGE-SCALE AUDIO-VISUAL DATASET.”** That paper does not explicitly state whether VGG-Sound is a subset of AudioSet; it describes VGG-Sound as a large-scale dataset consisting of over 200k video clips covering 300 audio classes sourced from YouTube videos. From what we recall, these 10-second videos are derived from processing AudioSet. However, since the paper lacks specific information on this matter, it was indeed an oversight on our part to state this. We sincerely appreciate you bringing this issue to our attention. In the future, we may cross-reference the YouTube IDs of the videos in VGG-Sound with those in AudioSet to validate whether VGG-Sound is a subset of AudioSet. Unfortunately, due to time constraints, we regret that we cannot provide a definitive answer at this moment. Regarding VGG-Sound 40k, it is derived from the paper **“Contrastive Positive Sample Propagation along the Audio-Visual Event Line.”** That paper collects the VGGSound-AVEL100k dataset, and our **VGG-Sound 40k dataset consists of the 40,000 finely annotated videos of VGGSound-AVEL100k provided by its authors**. 
It would not be fair to compare the results of the 40k dataset with other papers that report on the 200k dataset because the latter covers **300** classes while the former only comprises **141** classes. > Besides the question regarding the dataset, I actually think it is quite easy for a model to have better av performance than a or v-only performance, but what I really want to know is that - comparing a model that explicitly focuses on modality-mutual information (what is proposed) and a model without such focus (e.g., say MBT from Google, or a variant of the proposed method that without such focus), which would be better? We apologize for any misunderstanding and for not conducting the experiments you were hoping to see. In our opinion, models like MBT that do not explicitly focus on modality-mutual information may achieve comparable or even better results in coarse-grained tasks such as AudioSet and VGGSound. As you mentioned, these tasks often require only a single modality, where the goal is to extract information that is not present in one modality but can be found in the other. However, in more challenging downstream tasks such as AVE, AVVP, AVS, and AVQA, which are also the focus of our work, **our proposed method is expected to perform better**. Just like the aforementioned example, these tasks often require the integration of information from both modalities to achieve successful results. ------ Thank you again for raising the questions, and we hope our response has been helpful to you. Finally, we wish you all the best!
NeurIPS_2023_submissions_huggingface
2023
Optimal Transport-Guided Conditional Score-Based Diffusion Model
Accept (poster)
Summary: This paper introduces a novel approach for training conditional score-based models. The proposed approach integrates the optimal-transport-based semi-supervised learning method into the popular score-based modeling framework with SDE sampling processes. The theoretical analysis presented in Section 4 shows that the proposed method minimizes the 2-Wasserstein distance between the true and the modeled distributions. Furthermore, the experimental results in Section 5 demonstrate the superior performance of the proposed method in various applications, including super-resolution and image-to-image translation. Strengths: **Strengths** In general, this paper is well-written, displaying a clear and logical structure. The definitions and formulations are presented with clarity and precision. The strengths of this paper are listed as follows: 1. The authors have provided a thorough discussion of the design choices made in the proposed framework, considering both practical and theoretical aspects. For example, in Section 3.3, they specifically address the potential drawback of directly implementing Eq. (9) and introduce the resampling-by-compatibility technique as a solution to this issue. 2. The experimental results as well as the ablation analyses provided in Section 5 demonstrate the effectiveness of the proposed method. The presented results show that the proposed OTCS approach achieves state-of-the-art performance across multiple applications, such as super-resolution and image-to-image translation. 3. There has been growing interest in extending the conditional generation capabilities of score-based models to semi-supervised and unsupervised learning scenarios. This paper makes a significant contribution by providing a comprehensive exploration of this objective. Weaknesses: **Weaknesses** However, the following aspects of the paper can be improved: 1. Clarifying how to calculate $H$ and its computational cost seems necessary (see Question 1). 2. 
The proposed method introduces a number of hyper-parameters. Specifying the approach for determining these values should be required (see Questions 2 and 3). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: **Questions** 1. The computation of $H(\mathbf{x}, \mathbf{y})$ plays a critical role in the proposed objective function and the resampling-by-compatibility technique. However, when dealing with a large value of $K$, it can become computationally intensive due to the calculation of the Jensen-Shannon (JS) divergence between the two densities specified in Eq. (4). Could the authors provide detailed information regarding the computational cost of their training method? 2. It seems that the optimization process of Eq. (6) may be unstable, as the objective maximizes both $u_{\omega}(\mathbf{x})$ and $v_{\omega}(\mathbf{y})$ during training. In addition, it is observed that the learning rates are set to very small values (e.g., 1e-5 and 1e-6) throughout the experiments. Could the authors explain why the learning rates are set to such extremely small values? Does $\omega$ stably converge to its optimal value? 3. The selection of the regularization factor $\epsilon$ seems to be a crucial aspect of Eqs. (5) and (6). Could the authors elaborate on how to determine the regularization factor $\epsilon$? Are the training stability and evaluation accuracies of $u_{\omega}, v_{\omega}$ sensitive to the value of this hyper-parameter? 4. The experimental results reveal a significant performance difference between SCONES and OTCS. To further understand the factors contributing to this discrepancy, it would be helpful to clarify whether the model architecture used in SCONES is the same as that adopted in OTCS. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: **Suggestion** The illustration in Figure 1 lacks some crucial information. To improve clarity and provide a more comprehensive understanding of the proposed approach, the reviewer recommends including the 2-stage training procedure with their respective objective functions in the figure. Additionally, depicting the sampling process, including the SDE for conditional generation, would further enhance the visual representation. **Minor Error** There is a typo in line 108 (i.e., $R^s_\mathbf{y} \to R^t_\mathbf{y}$). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and suggestions on our paper. **Q1: Computational cost of our training method.** In our experiments, considering that the number of paired data could be small in real-world applications, we set $K$ to 3 for animal images and 10 for digits. We report the computational time cost of our training process below. Note that for large $K$, we may calculate the relation in Eq. (4) using mini-batches of the paired data when training the potentials $u_{\omega}$ and $v_{\omega}$. As illustrated in Algorithm 2 in Appendix A.2, our method consists of three processes in training: (1) *learning the potentials $u_{\omega}$ & $v_{\omega}$*, (2) *computing $H$ & storing the target sample indexes with non-zero $H$ ($H>0.001$) for each source sample*, and (3) *training the score-based model $s_{\theta}$*. We report the computational time cost of these three processes in the following Table r4-1.

Table r4-1: computational time cost of training processes.

|Dataset|Learning $u_{\omega}$ & $v_{\omega}$ (300k steps)|Computing & storing $H$|Training $s_{\theta}$ (600k steps)|
|:-:|:-:|:-:|:-:|
|CelebA|3.5 hours|0.5 hours|5 days|
|Animal|2.0 hours|0.05 hours|5 days|

From Table r4-1, we can see that (1) learning $u_{\omega}$ & $v_{\omega}$ and (2) computing & storing $H$ take no more than 4 hours. Similar to other diffusion approaches, (3) training our score-based model $s_{\theta}$ takes a few days. **Computational time of each operation in a single step of training $s_{\theta}$**. In each step of training the score-based model $s_{\theta}$, we sequentially (1) sample the index of a target sample with probability proportional to $H$ for a randomly selected source sample index *(sampling index)*, then (2) load the corresponding images *(loading image)*, and (3) finally feed the data to the network and update the model parameters *(updating network)*. 
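For illustration, the *sampling index* operation can be sketched in a few lines (a minimal sketch, not the actual implementation; the helper name `sample_target_index` and the toy `h_row` weights are placeholders):

```python
import random

def sample_target_index(h_row, rng):
    """Draw a target index with probability proportional to the
    compatibility values H(x, y_j) for a fixed source sample x."""
    total = sum(h_row)
    if total <= 0:
        # Degenerate case: no target with non-zero H; fall back to uniform.
        return rng.randrange(len(h_row))
    return rng.choices(range(len(h_row)), weights=h_row, k=1)[0]

# Toy row of H: the third target carries almost all compatibility mass.
h_row = [0.0, 0.001, 10.0, 0.002]
rng = random.Random(0)
counts = [0] * len(h_row)
for _ in range(1000):
    counts[sample_target_index(h_row, rng)] += 1
# counts[2] absorbs nearly all 1000 draws.
```

Since only the target indexes with $H>0.001$ are stored per source sample, the effective `h_row` is short in practice, which is consistent with the small *sampling index* time reported in Table r4-2.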
Compared with the training of the score-based model in the paired setting, our training additionally contains the sampling-index operation. From Table r4-2, we can see that sampling the index takes much less time than updating the network.

Table r4-2: computational time of operations in a single step of training $s_{\theta}$ on the Animal dataset.

|Sampling index|Loading image|Updating network|
|:-:|:-:|:-:|
|0.0005 seconds|0.01 seconds|0.7 seconds|

We will include the computational time cost in Appendix D, and cite it in Sect. 5.3. **Q2: Why are the learning rates set to such extremely small values? Does $\omega$ stably converge to its optimal value?** We show the objective function (Eq. (6)) during training in **Figs. r-1(a-b) of the uploaded pdf in the "global response"**. We can see that the objective function first increases and then converges, under learning rates 1e-5 and 1e-6. We notice that different $u_{\omega}$ and $v_{\omega}$ may yield the same $H$ (e.g., $u_{\omega}(\mathbf{x})+c$ & $v_{\omega}(\mathbf{y})-c$ yield the same $H(\mathbf{x},\mathbf{y})$ as $u_{\omega}(\mathbf{x})$ & $v_{\omega}(\mathbf{y})$, as in Eq. (7)). We then show the relative change of $H$ during training in Fig. r-1(c). We can see that the relative difference of $H$ first decreases and then fluctuates near zero, which may be because the optimization is based on approximated gradients over mini-batches. The $\frac{1}{\epsilon}$ ($\epsilon=$ 1e-5 or 1e-7 in experiments) in Eq. (6) may yield large gradients. We therefore choose a small learning rate to stabilize the training. These results will be included in Appendix D, and cited in Sect. 5.3. **Q3: Could the authors elaborate on how to determine the regularization factor $\epsilon$? Are the training stability and evaluation accuracies of $u_{\omega},v_{\omega}$ sensitive to the value of this hyper-parameter?** To better approach the original OT in Eqs. (2-3) by the $L_2$-regularized OT in Eq. 
(5), so that the OT guidance can be better achieved, $\epsilon$ should be small. However, due to the term $\frac{1}{\epsilon}$ in the objective function in Eq. (6), a smaller $\epsilon$ may suffer from numerical issues in training. As a balance, we empirically choose an $\epsilon$ from the candidate values {1e-5, 1e-6, 1e-7} such that the training is more stable. We show the objective function curves under varying $\epsilon$ in **Fig. r-2 in the uploaded pdf**. The training curves appear stable in general. We have also reported the results with varying $\epsilon$ in Table A-1 in Appendix D.3. From Table A-1, we can see that the FID ranges in [13.68, 14.56] (which appears stable) for $\epsilon$ in [1e-7, 1e-3]. The Acc (reflecting how well the OT guidance is imposed) is {95.11, 96.00, 96.44, 90.22, 77.78} for $\epsilon$ in {1e-7, 1e-6, 1e-5, 1e-4, 1e-3}. We can see that the Acc is similar for $\epsilon$ in {1e-7, 1e-6, 1e-5}, and decreases as $\epsilon$ increases from 1e-5 to 1e-3. These results will be included in Appendix D, and cited in Sect. 5.3. **Q4: Clarifying whether the model architecture used in SCONES is the same as that adopted in OTCS.** Thanks for this suggestion. We clarify that, for fair comparison, SCONES and OTCS utilize the same architecture of the score-based model (except that OTCS takes an additional block for conditioning on source data) and the same trained potentials $u_{\hat{\omega}}$ & $v_{\hat{\omega}}$. This clarification will be included in the last paragraph of Sect. 4. **Q5: Revision of Fig. 1.** Thanks for this suggestion. As suggested, we will add two dotted frames in Fig. 1 to split the workflow into two stages, whose titles are respectively "Stage I: $\min_{\omega} \mathcal{F}\_{\rm OT}(u_{\omega},v_{\omega})$" and "Stage II: $\min_{\theta}\mathcal{J}\_{\rm CDSM}(\theta)$". We use $\mathcal{F}_{\rm OT}(u,v)$ to denote the objective function of the $L_2$-regularized OT in Eq. (6) for convenience. 
Meanwhile, we will add the equations of forward and reverse SDEs to Fig. 1. **Q6: About the typo.** Thanks for this suggestion. We will correct it in the revised paper. --- Rebuttal Comment 1.1: Title: Response to the Authors Comment: I appreciate the comprehensive response provided by the authors, which fully addressed my questions. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer Tt9x Comment: Thanks again for the valuable review, enabling us to enhance our paper.
Summary: This paper studies the conditional score-based diffusion model (SBDM), which is an important topic in current machine learning research and has great potential for real-world applications. To overcome the limitations of existing methodology, i.e., the requirement of paired data and the lack of methodology for conditional models, this paper proposes an optimal transport (OT) guided diffusion model, where the transport plan/coupling serves as: 1) a reasonable proxy for the conditional relation when paired data are inaccessible; 2) guidance and information for implementing the conditional score-based diffusion process. Theoretical results are derived to provide an intuitive understanding of the proposed method, and comprehensive experiments are conducted to validate the empirical efficiency. Strengths: + The paper is well-written and easy to follow, where the problems, motivations, and solutions are clearly stated. + An OT-guided diffusion model is proposed, which addresses the main challenges in existing research. + Theoretical results guarantee the interpretability, learnability, and bounded property of the OT-guided model. + Significant empirical results in the I2I translation application. Weaknesses: **1.** The justification for the OT-guided model could be enhanced. In Eq. (9) and Section 3.2, the OT-guided model is defined, where the difference between $\mathcal{J} _{DSM}$ and $\mathcal{J} _{CDSM}$ is the “weights” $H$ and $\mathcal{C}$. The authors claim that $H$ and $\mathcal{C}$ can be considered as soft and hard relationships, while some further discussion is highly expected, e.g., is the model with the soft weight $H$ still a diffusion model? **2.** Based on the question above, can the hard and soft relationships be analyzed under empirical scenarios? For example, does $H$ or the induced coupling $\pi$ have a sparse (hard) or dense (soft) property? **3.** The $a_+$ operation in the $L_2$-regularized OT (i.e., Eq. (6)) seems to be redundant. 
Since there is already a square operation in $u(x)+v(y)-\xi (x,y)$, the values of the function are strictly non-negative. **4.** About clarity. (1) Abbreviations are used before being defined, e.g., SDE. (2) The main body and appendix are generally independent; thus, some assumptions should be clarified in the main body, e.g., the assumptions in Thm. 2. **5.** The limitations of the proposed method should be discussed, e.g., under what scenario will the OT fail to capture the relation information, and how does the guidance of OT affect the model performance? Some brief discussion and conclusion are highly expected. ---- --**Post-Rebuttal Update**-- After checking the responses by the authors, the concerns are addressed and the overall quality is improved, including: clarity on writing and limitations (W4, W5), mathematical rigor on the diffusion model and function property (W1, W3), and in-depth analysis of the property of the coupling $H$. Considering the response and other reviews, I agree that this paper is technically solid and has certain merits. Thus, I would like to raise the score and hope the authors will carefully incorporate the reviewers' suggestions and their responses. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I would like the authors to address the concerns in **weaknesses**. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations and broader impacts are not discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and suggestions on our paper. **Q1: The justifications for the OT-guided model could be enhanced. In Eq. (9) and Section 3.2, the OT-guided model is defined, where the difference between $J_{DSM}$ and $J_{CDSM}$ is the “weights” $\mathcal{C}$ and $H$. The authors claim that $H$ and $\mathcal{C}$ can be considered as soft and hard relationships, while some further discussions are highly expected, e.g., is the model with soft weight still a diffusion model?** We replace $\mathcal{C}$ in $J_{DSM}$ in Eq. (8) by $H$ to develop the formulation $J_{CDSM}$ in Eq. (9) to train the conditional score-based model $s_{\theta}$ for unpaired or partially paired settings. We next show that such an OT-guided model is still a diffusion model. Specifically, as we show in Theorem 1 in Sect. 3.5, $J_{CDSM}$ shares the same gradient w.r.t. $\theta$ with $J\_{CSM}$ defined in Theorem 1. $J_{CSM}$ corresponds to the following diffusion model. Given condition data $\mathbf{x}$, we take the learned conditional transport plan $\hat{\pi}(\mathbf{y}|\mathbf{x})=H(\mathbf{x},\mathbf{y})q(\mathbf{y})$ as the initial distribution $p_0(\mathbf{y}_0|\mathbf{x})$ to generate a target sample $\mathbf{y}_0$. We then perform the forward SDE $d\mathbf{y}=f(\mathbf{y},t)dt+g(t)d\mathbf{w}$ to obtain the noisy sample $\mathbf{y}$ at time $t$, and train the conditional score-based model $s\_{\theta}(\mathbf{y};\mathbf{x},t)$ to approximate $\nabla\log p_t(\mathbf{y}|\mathbf{x})$ by minimizing $J\_{CSM}$ (or equivalently $J\_{CDSM}$). The corresponding reverse SDE is $d\mathbf{y}=[f(\mathbf{y},t)-g(t)^2 \nabla\log p_t(\mathbf{y}|\mathbf{x})]dt+g(t)d\mathbf{w}$$\approx [f(\mathbf{y},t)-g(t)^2 s\_{\theta}(\mathbf{y};\mathbf{x},t)]dt+g(t)d\mathbf{w}$, which is consistent with our inference process in Eq. (10). We will summarize these discussions and include them in Sects. 3.2 and 3.5. 
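As a toy illustration of the reverse SDE above, the sketch below applies an Euler-Maruyama discretization with the placeholder choices $f=0$ and $g(t)=1$ and an analytic standard-normal score in place of the trained $s_{\theta}$; it is a didactic sketch under these assumptions, not the paper's sampler:

```python
import math
import random

def reverse_sde_euler(score, y_T, n_steps=1000, T=5.0, seed=0):
    """Euler-Maruyama discretization of the reverse SDE
    dy = [f(y,t) - g(t)^2 * score(y,t)] dt + g(t) dw,
    integrated from t = T down to t = 0 with the toy choices
    f(y,t) = 0 and g(t) = 1. In the reverse pass dt is negative,
    so the drift enters with a flipped sign below."""
    rng = random.Random(seed)
    dt = T / n_steps
    y = y_T
    for i in range(n_steps):
        t = T - i * dt
        g = 1.0
        # [0 - g^2 * score] * (-dt)  =  g^2 * score * dt
        y = y + (g ** 2) * score(y, t) * dt + g * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return y

# Score of a standard normal: grad log p(y) = -y, so trajectories
# started far away should relax toward 0 on average.
ys = [reverse_sde_euler(lambda y, t: -y, y_T=5.0, seed=s) for s in range(50)]
mean_y = sum(ys) / len(ys)
```

In the conditional case of Eq. (10), `score` would be replaced by the trained $s_{\theta}(\mathbf{y};\mathbf{x},t)$ evaluated at the conditioning input $\mathbf{x}$.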
**Q2: Based on the question above, can the hard and soft relationships be analyzed under empirical scenarios? For example, does $H$ or the induced coupling $\pi$ have a sparse (hard) or dense (soft) property?** To study how sparse $H$ is, for each target image $\mathbf{y}$, we denote the number of source images $\mathbf{x}$ with "non-zero $H$" as $n_{\mathbf{y}}$ (i.e., $n_{\mathbf{y}}=|\\{\mathbf{x}:H(\mathbf{x},\mathbf{y})>0.001\\}|$, considering numerical issues) in the CelebA dataset. The histogram of $n_{\mathbf{y}}$ is shown in Table r3-1.

Table r3-1: histogram of the number $n_{\mathbf{y}}$ of source images with "non-zero $H$" for target image $\mathbf{y}$, where the total numbers of both source images and target images are 80k.

|Bins for $n_{\mathbf{y}}$|[0,10)|[10,20)|[20,50)|[50,100)|[100,600)|[600,80k]|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Frequency|59600|8064|8468|3537|1716|0|

We can see from Table r3-1 that all the target images have $n_{\mathbf{y}}\leq 600$, and more than 70% of target images have $n_{\mathbf{y}}\leq 10$. This implies that for each $\mathbf{y}$ in more than 70% of target images, there are no more than 10 among the 80k source images $\mathbf{x}$ satisfying $H(\mathbf{x},\mathbf{y})>0.001$. So $H$ is sparse to some extent. We also count the number of target images with $n_{\mathbf{y}}=1$ ($n_{\mathbf{y}}=1$ means that the target image is paired with one source image), which is 8579 (around 10%). These empirical results indicate that $H$ may provide a "soft" coupling relationship, since there may exist multiple source images with "non-zero $H$" for most target images. We will include this empirical analysis in Appendix D (Table r3-1 will be displayed as a bar plot), and cite it in Sect. 3.2 of the revised paper. **Q3: The $a_+$ operation in the $L_2$-regularized OT (i.e., Eq. (6)) seems to be redundant. 
Since there is already a square operation in $u(\mathbf{x})+v(\mathbf{y})-\xi(\mathbf{x},\mathbf{y})$, the values of the function are strictly non-negative.** Sorry for this misunderstanding. In Eq. (6), the symbol $(\cdot)\_+^2$ means that we first perform the $a_+$ operation and then the square operation. We will use $[(\cdot)_+]^2$ in the revised paper. **Q4: About clarity. (1) Abbreviations are used before giving definitions, e.g., SDE. (2) The main body and appendix are generally independent; thus, some assumptions should be clarified in the main body, e.g., assumptions in Thm. 2.** Thanks for the suggestions. (1) We will use "stochastic differential equation (SDE)" to replace "SDE" in Sect. 2.1, where "SDE" is used for the first time, in the revised paper. (2) In Appendix B.3.1, assumptions (1)-(8) are based on the assumptions in Kwon et al. (NeurIPS, 2022), most of which are used to ensure the existence of solutions of the SDEs. Assumption (9) states that the Lagrange function is $\kappa$-strongly convex in $L_1$-norm w.r.t. $\pi$, which is true for some $\kappa$ as shown in Daniels et al. (NeurIPS, 2021). As suggested, we will include assumption (9) in the second paragraph of Sect. 4 in the revised paper. Meanwhile, we will cite Kwon et al. (NeurIPS, 2022) and mention assumptions (1)-(8) in this paragraph. **Q5: Some brief discussion and conclusion on limitations are expected.** Thanks for the suggestion. When applying our approach, the cost function should be determined first. In our experiments, we simply chose the squared $L_2$-distance in image space for unpaired super-resolution and in feature space for semi-paired I2I (please refer to Q2 for reviewer H2CC for details), and achieved satisfactory performance. However, if more domain knowledge is employed to define the cost function, the performance may be improved. Meanwhile, if the number of target data is small, the generation ability of our trained model may be limited. 
As suggested, we will include the discussion on limitations in the revised paper following the conclusion. --- Rebuttal Comment 1.1: Title: Concerns are addressed. Comment: I thank the authors for the detailed justifications, which improve the clarity, mathematical rigor, and empirical validation. After checking other reviews, I think this paper is technically solid and the responses have addressed my concerns. Thus, I would like to raise the score and hope the authors will carefully incorporate the reviewers' suggestions and their responses. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer 63Ma Comment: We thank the reviewer again for the valuable review, which enables us to enhance the paper. As suggested, we will carefully revise the paper according to the suggestions of reviewers and the responses.
Summary: This paper proposes the Optimal Transport-guided Conditional Score-based diffusion model (OTCS), a novel approach for training unpaired / semi-paired image-to-image translation networks. The proposed approach leverages unsupervised / semi-supervised optimal transport to learn a coupling between images and uses this information as guidance during training. The proposed OT-guided Conditional Denoising Score Matching objective recovers the original Conditional Denoising Score Matching objective if the transport plan is accurate. Through experiments on unpaired and semi-paired image-to-image translation tasks, the proposed OTCS achieves state-of-the-art results. Strengths: The proposed approach is clearly described and presents a novel idea to overcome the requirement of paired images in training image-to-image translation networks. The authors successfully fuse optimal transport into the Denoising Score Matching objective with a theoretical guarantee. Also, to handle the possible absence of matched target samples in batch training, the authors propose resampling-by-compatibility, a novel component of OTCS that aims to select good target samples during training. Overall, the paper is written clearly and the experimental results strongly demonstrate the effectiveness of the proposed method. Weaknesses: Since the method requires estimation of the compatibility function to solve the optimal transport, it incurs additional computational cost (e.g., learning u and v when solving the dual problem). Also, resampling-by-compatibility includes an additional sampling stage. Lastly, the experimental setup is somewhat synthetic in that there always exists a good target in the dataset. It would be nice if there were more real-world unpaired / semi-paired image-to-image translation tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions 1. What if there is no good target for some source data? Does the compatibility function then become zero? 
In other words, how well calibrated is the compatibility function? 2. I think the proposed approach can be generalized to the training of text-to-image score-based models. For example, given a set of image-caption pairs and an unpaired image-only dataset, the proposed OTCS could be used to further improve the generative quality of diffusion models. Have the authors tried such a task? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Explicitly confirming the limitations of this work could enrich the paper. For example, what should be considered when tackling unpaired / semi-paired image-to-image translation tasks in the wild, and when does the method fail (e.g., when the target dataset is not sufficient)? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and suggestions on our paper. **Q1: Computational cost of the training method.** As illustrated in Algorithm 2 in Appendix A.2, our method consists of three processes in training: (1) *learning the potentials $u_{\omega}$ & $v_{\omega}$*, (2) *computing $H$ & storing the target sample indexes with non-zero $H$ ($H>0.001$) for each source sample*, and (3) *training the score-based model $s_{\theta}$*. We report the computational time cost of these three processes in the following Table r2-1.

Table r2-1: computational time cost of training processes.

|Dataset|Learning $u_{\omega}$ & $v_{\omega}$ (300k steps)|Computing & storing $H$|Training $s_{\theta}$ (600k steps)|
|:-:|:-:|:--:|:--:|
|CelebA|3.5 hours|0.5 hours|5 days|
|Animal|2.0 hours|0.05 hours|5 days|

From Table r2-1, we can see that (1) learning $u_{\omega}$ & $v_{\omega}$ and (2) computing & storing $H$ take no more than 4 hours. Similar to other diffusion approaches, (3) training our score-based model $s_{\theta}$ takes a few days. **Computational time of each operation in a single step of training $s_{\theta}$**. In each step of training the score-based model $s_{\theta}$, we sequentially (1) sample the index of a target sample with probability proportional to $H$ for a randomly selected source sample index *(sampling index)*, then (2) load the corresponding images *(loading image)*, and (3) finally feed the data to the network and update the model parameters *(updating network)*. Compared with the training of the conditional score-based model in the paired setting, our training additionally contains the sampling-index operation. From Table r2-2, we can see that sampling the index takes much less time than updating the network. Table r2-2: computational time of operations in a single step of training $s_{\theta}$ on the Animal dataset. 
| Sampling index| Loading image| Updating network| | :-: | :-: | :-: | | 0.0005 seconds|0.01 seconds|0.7 seconds| We will include the computational time cost in Appendix D, and cite it in Sect. 5.3. **Q2: The experimental setup is somewhat synthetic in that there always exist good targets in the dataset. It would be nice if there were more real-world unpaired / semi-paired image-to-image translation tasks. What if there is no good target for some source data? Does the compatibility function then become zero? In other words, how well is the compatibility function calibrated?** We clarify that corresponding paired target data does not exist for each source image in the training datasets, in all our experiments. For example, in the unpaired super-resolution on CelebA, the ground-truth high-resolution image for each low-resolution image is not in the training dataset. To figure out what happens to the compatibility function $H$ when there is no good target data, we conduct the following experiments. Firstly, we count the number of source samples in the CelebA dataset for which there is no target sample with $H>0.001$. We find that **25.9%** of source samples meet this condition. Note that the other source samples often have $H$ larger than 1000 on some target samples (the large $H$ may be because $\epsilon$ is 1e-7 in Eq. (7)). Secondly, we add noisy images to the source dataset to train the potentials $u_{\omega}$ and $v_{\omega}$, and count the ratio of noisy images satisfying $H<0.001$ on all target samples. The results are reported in Table r2-3. The noisy images are generated from the standard normal distribution and have the same shape as the source images. Table r2-3: ratio of noisy images with $H<0.001$ when adding varying numbers of noisy images to the source dataset.
|Number of noisy images : Number of clean images|0.1 : 1|0.2 : 1|0.3 : 1|0.4 : 1|0.5 : 1| |-|:-:|:-:|:-:|:-:|:-:| |**Ratio of noisy images assigned with $H<0.001$**|89.3%|85.6%|83.9%|81.6%|80.2%| It can be seen that more than 80% of the noisy images, which have no good target data, are assigned near-zero $H$ ($H<0.001$) when the ratio of noisy images to clean images is in [0.1, 0.5]. In our experiments, we used real-world images in both unpaired super-resolution and semi-paired I2I. In unpaired super-resolution, we follow the super-resolution methods [9] in downsampling the high-resolution images to obtain low-resolution images. As suggested, we will investigate more real-world applications in future work. We will include the above results and discussions in Appendix D and cite them in Sect. 5.3 in the revised paper. **Q3: Regarding experiments on text-to-image translation.** Thanks for the suggestion. Text-to-image translation approaches (Saharia et al., NeurIPS, 2022; Rombach et al., CVPR, 2022) often embed the caption into a latent space, on which the score-based model is conditioned. These methods leverage large-scale paired text-image datasets, such as COCO, to train the score-based model, achieving satisfactory performance (e.g., Stable Diffusion and Saharia et al., NeurIPS, 2022). We do believe it is interesting to investigate the performance of our approach when given only a small fraction of text-image pairs. Since training score-based models on large-scale text-image datasets is too time-consuming for the limited rebuttal period, we leave it for future investigation. **Q4: Explicitly confirming limitations.** Thanks for the suggestion. The cost function must be determined first when using our method. In the experiments, we simply choose the squared $L_2$-distance in image space for unpaired super-resolution and in feature space for semi-paired I2I, achieving satisfactory performance.
However, if more domain knowledge is employed to define the cost function, the performance may be further improved. Meanwhile, if the number of target data is small, the generation ability of our trained model may be limited. As suggested, we will include these limitations in the revised paper. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I sincerely appreciate the authors' effort on the rebuttal despite the short period. I am satisfied with the authors' response to the questions and concerns that I mentioned, and I suggest acceptance. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer w9nv Comment: We thank the reviewer again for the valuable review. We will revise our paper as discussed in the response.
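The "sampling index" step described in the rebuttal above can be sketched in a few lines. This is a minimal illustration of ours, not the authors' code: for a randomly selected source sample, a target index is drawn with probability proportional to its row of the compatibility function $H$, with near-zero entries ($H<0.001$, the same threshold used in the rebuttal) treated as zero.

```python
import random

# Hypothetical sketch (ours) of the "sampling index" operation: draw a target
# index with probability proportional to the compatibility values H for one
# source sample, treating near-zero entries (H < 0.001) as zero.
def sample_target_index(h_row, rng, threshold=1e-3):
    weights = [h if h >= threshold else 0.0 for h in h_row]
    if sum(weights) == 0.0:
        return None  # no good target exists for this source sample
    return rng.choices(range(len(h_row)), weights=weights, k=1)[0]

rng = random.Random(0)
h_row = [0.0, 2.0, 0.0005, 1.0]  # toy compatibility values
draws = [sample_target_index(h_row, rng) for _ in range(1000)]
assert set(draws) <= {1, 3}  # sub-threshold entries are never drawn
```

A source sample whose entire $H$ row falls below the threshold (the 25.9% case discussed above) yields no target and can simply be skipped during training.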
Summary: This paper proposes to use Optimal Transport (OT) to guide the training of a conditional score-based diffusion model. The method first computes the coupling of the source and target data and then incorporates the coupling into the conditional score-based model objective. CelebA face super-resolution, AFHQ animal, and MNIST digit translation experiments show the proposed method performs better than state-of-the-art methods. Strengths: 1. The idea of adopting Optimal Transport for training a Conditional Score-Based Diffusion Model (SBDM) seems novel. For unpaired or semi-paired source and target data, it is natural to use OT to find the couplings and then train the SBDM. 2. The authors reformulated the original conditional SBDM such that it can take the coupling between the source and target data points (Proposition 1). Based on the reformulation, the authors proposed the OT-guided conditional denoising score-matching objective in Eq. 9. 3. The authors showed that the proposed method approximately generates samples from the conditional transport plan and gave an upper bound on the expected Wasserstein distance between the distribution of generated samples and the optimal transport plan. 4. Unpaired super-resolution results on the CelebA dataset show that the method can restore degenerated face images. The FID and SSIM show that the method significantly outperforms other OT, flow, and diffusion methods. Semi-paired I2I translation results show that the proposed method performs I2I well and outperforms other methods. Weaknesses: The OT part is computed first, and the couplings are fixed before training the diffusion model. So, the whole training pipeline is not end-to-end, which may yield sub-optimal results. It's not clear what cost function, $c(\cdot, \cdot)$ in Eq. 4, is used to compute the source-image and target-image transport cost. Also, it's not clear where the cost function is applied. Is it applied in the image space? From Fig.
4 (a), it is hard to tell why the translated animals are good. From my understanding, the method should generate the translated animal with the same pose as the animal in the source image. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors use "condition data" as source data in I2I. In general, we refer to "source data" instead of condition data. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: If the transport cost is applied in the image space, then the OT may not make much sense, because the cost computed in the image space may not reflect the true semantic distance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and suggestions on our paper. **Q1: The OT part is computed first, and the couplings are fixed before training the diffusion model. So, the whole training pipeline is not end-to-end, which may yield sub-optimal results.** In this paper, we aim to use OT to guide the learning of a conditional score-based model for applications with unpaired or partially paired training datasets. To tackle the lack of a coupling relationship in unpaired or partially paired settings, we use $L_2$-regularized OT to build the coupling relationship between source and target data, as mentioned by the reviewer in Strengths 1. The coupling relationship in $L_2$-regularized OT is modeled by the compatibility function $H(\mathbf{x},\mathbf{y})$ as shown in Eq. (7). To develop the formulation of conditional SBDM for unpaired or partially paired settings, as mentioned by the reviewer in Strengths 2, we provide in Eq. (8) a reformulation of conditional SBDM for the paired setting, in which the coupling relationship of paired source/condition data $\mathbf{x}$ and target data $\mathbf{y}$ is explicitly modeled by the function $\mathcal{C}(\mathbf{x},\mathbf{y})$. This naturally motivates us to replace $\mathcal{C}(\mathbf{x},\mathbf{y})$ in Eq. (8) by $H(\mathbf{x},\mathbf{y})$, extending this reformulation from the paired setting (Eq. (8)) to unpaired or partially paired settings (Eq. (9)). By using Eq. (9) to train the conditional score-based model $s_{\theta}$, the guidance of OT is imposed. As demonstrated by the experimental results in Table 1, our approach is superior to the compared state-of-the-art approaches in the unpaired super-resolution and semi-paired image-to-image translation experiments. Since the objectives for learning the compatibility function $H$ (i.e., learning $u_{\omega},v_{\omega}$) and the conditional score-based model $s_{\theta}$ are different, it is non-trivial to learn them by end-to-end training.
We will strive toward this goal in the future. **Q2: It's not clear what cost function, $c(\cdot,\cdot)$ in Eq. (4), is used to compute the source-image and target-image transport cost. Also, it's not clear where the cost function is applied. Is it applied in the image space?** Sorry for missing the specification of the cost function in the main body of the paper (we have included it in Appendix C). For the unpaired super-resolution task in Sect. 5.1, we should preserve the local structure and image details of low-resolution images in the transported high-resolution counterparts. We therefore take the squared $L_2$-distance in image space as the cost function, as discussed in Appendix C.2. For the semi-paired image-to-image translation task in Sect. 5.2, we aim to transport the source image to the desired classes in the target domain, guided by the given cross-domain image pairs with desired paired class labels. To do this, as discussed in Appendix C.3, we map the images to semantic feature spaces using a pre-trained feature extractor. The cost function is taken as the squared $L_2$-distance in the semantic feature space. For animal images, the feature extractor is the CLIP image encoder. For digits, we train an auto-encoder on each domain and take the encoder part as the feature extractor. Due to space limits, we included these details in Appendix C. Since the main paper and appendix are generally read independently, we will clarify the specification of the cost function in the first paragraphs of Sects. 5.1 and 5.2 in the revised paper. **Q3: From Fig. 4 (a), it is hard to tell why the translated animals are good. From my understanding, the method should generate the translated animal with the same pose as the animal in the source image.** We clarify that the setup of our considered semi-paired I2I differs from the setup of unpaired I2I [15-18,35,36].
Our goal is to transport the source images to the desired classes guided by a few paired images with desired paired class labels. In contrast, unpaired I2I [15-18,35,36] aims to preserve the pose of source images when translating them to the target domain. As implied by the experimental results in Table 1, the recent unpaired I2I methods [17,35,36] and the semi-paired I2I method [14] struggle to translate source images to the desired classes. To apply our approach to semi-paired I2I, as discussed in Q2, we take the squared $L_2$-distance in the semantic feature space as the cost function. We use the image encoder of CLIP to extract the image features, which may mainly capture the class information of the image, since CLIP features are good for classification as shown in the CLIP paper. The experimental results in Table 1 and Fig. 4 show the better performance achieved by our approach. It is interesting to investigate translating source images to desired classes while preserving poses. For this purpose, we may additionally utilize features encoding pose information to compute the cost function. We leave this as future work. We will summarize this clarification and include it in Sect. 5.3. **Q4: The authors use "condition data'' as source data in I2I. In general, we refer to "source data'' instead of condition data.** Thanks for this suggestion. As suggested, we will use "source data'' in the descriptions of I2I and the experiments. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thank the authors for the response! Most of my concerns are addressed. I also checked the other reviews. The paper is technically solid and achieves good experimental results. The authors need to revise the paper as promised in the response. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer H2CC Comment: Thanks again for the valuable review. We will carefully revise the paper as promised in the responses.
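The transport cost discussed in this thread, the squared $L_2$-distance between feature vectors (e.g., CLIP embeddings for the semi-paired I2I task), can be illustrated with a toy sketch. The helper names below are ours, and the feature extractor is assumed to have been applied already; features are plain lists of floats.

```python
# Hypothetical helpers (ours) illustrating the transport cost described above:
# squared L2-distance between pre-extracted feature vectors (for semi-paired
# I2I these would be e.g. CLIP image features).
def squared_l2_cost(feat_x, feat_y):
    return sum((a - b) ** 2 for a, b in zip(feat_x, feat_y))

def cost_matrix(source_feats, target_feats):
    # c(x, y) for every source/target pair, in the role of c in Eq. (4)
    return [[squared_l2_cost(x, y) for y in target_feats] for x in source_feats]

C = cost_matrix([[0.0, 0.0], [1.0, 0.0]], [[0.0, 0.0], [0.0, 2.0]])
assert C == [[0.0, 4.0], [1.0, 5.0]]
```

Swapping the image-space distance for a feature-space one only changes what is fed into these helpers; the OT machinery downstream is unchanged.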
Rebuttal 1: Rebuttal: Dear ACs and reviewers, Thanks for handling our paper. We have responded to the comments of each reviewer individually and discussed how to revise our paper according to the comments and suggestions. Meanwhile, we attached a pdf file containing the figures as support material for the rebuttal. Best wishes, Authors Pdf: /pdf/bf9a2a2422442d1bb7c4d9a2bf4450fdfc293afb.pdf
NeurIPS_2023_submissions_huggingface
2023
The Behavior and Convergence of Local Bayesian Optimization
Accept (spotlight)
Summary: Local Bayesian optimization has become a popular approach to high-dimensional optimization, achieving state-of-the-art empirical performance over various benchmarks. This paper investigates the behavior of local Bayesian optimization (BO) methods. Using a prototype algorithm, the authors empirically demonstrate the superior quality of the stationary points found by local BO compared to vanilla BO and grid search. Under the assumption that the parameters generating the function are known, they theoretically and experimentally analyze the convergence rate of local BO methods. Strengths: 1. I think the problem studied in this paper is important, and the theoretical and experimental results partially explain the good performance of local BO. 2. The theoretical analysis is easy to follow, and gives insight into how to design local BO hyperparameters according to the problem dimension. Weaknesses: 1. The prototype local BO algorithm samples points to reduce the gradient-estimation uncertainty, and moves the search center according to the estimated mean gradient, which may not be representative of all local BO methods, such as trust-region Bayesian optimization (TuRBO) [1]. 2. The assumptions in the theoretical analysis may not hold in many real-world problems. [1] Eriksson, David, et al. "Scalable global optimization via local Bayesian optimization." Advances in neural information processing systems 32 (2019). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Many experiments in this paper use GP sample paths as the objective function, with dimension up to 100 (Figure 3). How do you sample a function from a GP under such a high-dimensional input space? Is any discretization used during function sampling? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations. I did not find any potential negative societal impact of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We provide our response as follows. **Comment:** The assumptions in the theoretical analysis may not hold in many real-world problems. We definitely agree that these assumptions may not hold in real-world problems: most real-world functions are not GP sample paths. The assumptions we use (RKHS/GP sample) are the most commonly used assumptions in the BO theory literature, because proving convergence in the black-box setting with arbitrary model misspecification is generally a pretty daunting task. With that said, interestingly, there actually is hope for relaxed assumptions for local BO. For local BO, we don't need the GP to be a "potentially reasonable" surrogate model of the function everywhere in the input space. Intuitively, we only need sufficient smoothness to be able to estimate the gradient at the current iterate $x_t$; all other areas of the input space are irrelevant at iteration $t$. It is interesting future work to consider what the weakest assumptions are that enable provable GP-based gradient estimation. **Comment:** How do you sample a function from a GP under such a high-dimensional input space? Is any discretization used during function sampling? We did not use discretization in our experiment; see Line 99. We adapted the pathwise sampling technique of Wilson et al. (2021) as implemented in BoTorch, but using exact conditional calculations rather than variational / inducing-point conditional calculations (since our goal was not improved asymptotic complexity). --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. I think this paper inspires me a lot.
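The grid-free sampling idea in the rebuttal above can be illustrated with random Fourier features. Note this sketch is our simplification, not the authors' implementation (which follows the pathwise technique of Wilson et al. (2021) in BoTorch with exact conditioning): a draw from a GP with an RBF kernel is represented as an explicit function that can be evaluated at arbitrary points of a $d$-dimensional space, with no discretization.

```python
import math
import random

# Rough sketch (ours): an approximate GP sample path via random Fourier
# features. For the RBF kernel with unit length scale, the spectral density
# is a standard normal, so frequencies are drawn from N(0, I).
def sample_gp_path(d, num_features=256, seed=0):
    rng = random.Random(seed)
    omegas = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(num_features)]
    phases = [rng.uniform(0.0, 2 * math.pi) for _ in range(num_features)]
    weights = [rng.gauss(0.0, 1.0) for _ in range(num_features)]
    scale = math.sqrt(2.0 / num_features)  # so that Var[f(x)] ~ k(x, x) = 1

    def f(x):  # x: length-d list; a fixed function, evaluable anywhere
        return scale * sum(
            w * math.cos(sum(o_i * x_i for o_i, x_i in zip(o, x)) + b)
            for w, o, b in zip(weights, omegas, phases)
        )

    return f

f = sample_gp_path(d=100)
assert f([0.0] * 100) == f([0.0] * 100)  # deterministic: a function, not noise
```

The key point, matching the rebuttal, is that the sampled path is a closed-form function of its input, so even $d = 100$ poses no storage problem.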
Summary: Local Bayesian optimization methods are widely used to cope with the curse of dimensionality when optimizing high-dimensional black-box functions. Although these methods have very good performance empirically, virtually all theoretical results focus on the global optimization setting by proving bounds on the optimality gap. This paper shows that under typical assumptions a particular variant of local Bayesian optimization (GIBO) provably converges to a stationary point in polynomial time. Strengths: I greatly enjoyed this paper. It is well-written and easy to follow, and answers important questions with proof techniques that are novel to me. I am very happy to see that the authors included some empirical studies to verify the integrity of their results. Weaknesses: This work is ready for publication. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - based on the description in line 113 it seems the x axes in Figure 1 are mislabeled, is that correct? - I found the following statement on line 220 a bit vague: "However, because observations of f are made with noise, the noise inherit too [sic] both evaluations of f is amplified by 1/2h as h -> 0." Can you please elaborate? - I found the second paragraph of the discussion intriguing. Do you think the noise in the BayesOpt gradient estimate may play a similar role as the minibatch gradient noise in SGD? My understanding of the argument is minibatch gradient noise tends to cause SGD to find wider basins in the loss surface, which some have argued is linked to better generalization. In this context we aren't concerned with generalization per se, but perhaps one could argue that wider basins of f are preferable since they will be more forgiving if the estimate of the stationary point is slightly off. 
- in practice it is quite common to estimate the GP surrogate hyperparameters through the log marginal likelihood, can you comment on how this might affect your analysis and empirical behavior? - how might one start thinking about "convergence" in the multi-objective case? Could we prove that local BayesOpt converges to a locally non-dominated point? - although this paper did not propose GIBO, can you comment on its numerical stability in practice? In my experience look-ahead acquisition functions tend to exhibit poor numerical behavior, which I imagine is only amplified by introducing an additional derivative operator. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I see two primary limitations - the analysis is restricted to GIBO. It would be interesting to know if any similar arguments can be made for TurBO [1], which is quite widely used. - the analysis relies on fairly strong (albeit common) assumptions on the nature of f and the specification of the surrogate. It seems plausible to me that (for example) BayesOpt could find a stationary point even if the surrogate is misspecified, assuming f is not too pathological. [1] Eriksson, David, et al. "Scalable global optimization via local Bayesian optimization." Advances in neural information processing systems 32 (2019). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We provide our response as follows. **Comment:** The x-axis of Figure 1. Thanks for spotting this typo! Indeed, the x-axis in Figure 1 shows the indices of the dimension list {1, 5, 10, 20, 30, 50}. We will replot the x-axis of the figure. **Comment:** Line 220. We will rewrite this sentence in the revision. The central differencing scheme has $O(h^2)$ truncation error, and thus $\frac{\partial f}{\partial x} = \frac{1}{2h} \big( f(x + h) - f(x - h)\big) + O(h^2)$. To reduce the truncation error, we need a small $h$. However, in the presence of noise, each function evaluation is corrupted by a Gaussian. The right-hand side becomes $\frac{1}{2h} \big( f(x + h) - f(x - h) + 2 \epsilon\big) + O(h^2)$, where $\epsilon$ is Gaussian ($2\epsilon$ is due to the two noisy function evaluations). When $h \to 0$, the prefactor $\frac{1}{2h}$ blows up the noise. This illustrates that noise creates a sort of general hardness in estimating gradients, whether or not one uses a GP. **Comment:** Second paragraph of the discussion. Do you think the noise in the BayesOpt gradient estimate may play a similar role as the minibatch gradient noise in SGD? That's an interesting question. We think the answer might potentially be yes. Besides the case you mentioned, there is another intuitive case. Due to the estimation error, local BO may be less likely to be trapped by saddle points (a particular type of bad stationary point). The intuition is that convergence to saddle points is unstable: a slight perturbation may let the algorithm find a descent direction and therefore escape the saddle point. Here we use the term *algorithmic bias* to broadly refer to this phenomenon: local BO algorithms (e.g. GIBO) are able to find good solutions (as demonstrated by their strong empirical performance in previous research), despite the existence of potentially bad local solutions/stationary points.
It is not completely well-defined unfortunately, but may be worth studying in the future. **Comment:** Estimating the hyperparameters by maximizing the log likelihood. Our theoretical analysis assumes a kernel with fixed hyperparameters, which is relatively standard in the theoretical BO literature. Extending to unknown hyperparameters is interesting future work. In practical settings, we should of course generally expect that estimating the hyperparameters improves the model fit, which in turn typically improves the convergence. The empirical performance of local BO methods in settings where hyperparameters are learned is of course well established in prior work. **Comment:** Convergence in the multi-objective case. It might be possible to extend our results to multi-objective settings. One of the intermediate steps in our proof is to show that the objective decreases in every iteration if the gradient-estimation error is small (see the equations below Line 395). If we can show a similar property holds in the multi-objective setting (e.g., each iteration decreases all objectives), then it might be possible to prove convergence. Note that decreasing the objectives locally is a very natural requirement for local BO algorithms. We think it is interesting to check whether certain multi-objective BO algorithms satisfy this property (or to propose new algorithms satisfying this property). **Comment:** Numerical stability of GIBO. Optimizing the acquisition function of GIBO is generally stable with good initialization. The resulting designs typically center around the current iterate $\mathbf{x}$ (e.g. Figure 4). In practice, however, updating the iterates $\mathbf{x}_{t + 1} = \mathbf{x}_t - \eta \mathbf{g}_t$ may require careful attention. The estimated gradient $\mathbf{g}_t$ is only an **approximation** of the true gradient.
While in theory the approximation error can be bounded by our theorems, in practice the approximation error is affected by two additional complications: a) model misspecification and b) small batch size in the noisy setting. Thus, it is recommended to use small step sizes to avoid overshooting, or even line search strategies. Oftentimes normalizing the estimated gradient $\mathbf{g}_t$ is helpful (Muller et al., 2021). --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thanks for your response, this is great work and I remain strongly in favor of acceptance.
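The noise-amplification point from the Line 220 discussion above can be checked numerically: the variance of the central-difference estimator scales like $\sigma^2/(2h^2)$, so shrinking $h$ blows up the noise. The following is a small self-contained sketch of ours, not code from the paper.

```python
import random

# Measure the empirical standard deviation of the central-difference
# estimator (f(x+h) - f(x-h) + noise) / (2h). With f = 0 for simplicity,
# the estimator measures pure noise, whose std is sigma / (sqrt(2) * h).
def central_diff_std(h, sigma=0.1, trials=20000, seed=0):
    rng = random.Random(seed)
    estimates = [
        (rng.gauss(0, sigma) - rng.gauss(0, sigma)) / (2 * h)
        for _ in range(trials)
    ]
    mean = sum(estimates) / trials
    return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5

# Halving h roughly doubles the standard deviation of the estimate.
assert central_diff_std(0.01) > 1.5 * central_diff_std(0.02)
```

This mirrors the rebuttal's point that the hardness is generic to noisy gradient estimation, independent of whether a GP is used.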
Summary: The paper investigates the behavior and convergence properties of the local Bayesian optimization approach, in particular the method GIBO [16]. The paper provides the convergence rates of GIBO in both the noiseless and noisy settings. The paper also performs various empirical experiments to understand the tightness of the derived bounds. It also conducts some experiments to understand other properties of local BO, such as multiple restarts and the case where the objective function is non-differentiable. Strengths: + The paper tackles an important and difficult problem, which is to perform theoretical analysis for the local BO approach + The paper's writing is very clear and easy to understand. The paper is really technical; it has a lot of theory, but thanks to the clear writing, all the theoretical analysis is much easier to understand. + The theoretical analysis of the convergence rates of GIBO seems sound to me (based on my overall understanding of the theoretical analysis of BO; I couldn't dig into the proof in a very detailed manner). + Some empirical experiments are also conducted to understand the tightness of the derived bounds. Weaknesses: + The paper actually focuses on only one of the local BO approaches (the GIBO method), and this approach is quite different from some other popular local BO approaches like TuRBO. The GIBO approach is based on gradient descent, but other local BO approaches like TuRBO are still based on BO. The paper should be very clear about the focus of the theoretical analysis. + Another weakness of the paper is that it still relies on the assumptions used to analyze global BO, including the assumption that the objective function is a sample from a GP with known hyperparameters. Actually, one of the main motivations of the local BO approach is the observation that the objective function can't be modeled using one single global GP over the whole search domain.
Therefore, the assumptions of the theoretical analysis in this work seem quite strict, and the theoretical analysis (the bounds) may not shed much light on the performance of the local BO approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please answer my comments in the Weaknesses section, and the following additional question: + In the experiments (Section 6.1), how are the hyperparameters of the bounds or the algorithms set (like $\beta$)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper seems not to have a dedicated section describing the limitations of the work presented in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We provide our response as follows. **Comment:** The paper actually focuses on only one of the local BO approaches (the GIBO method). See the global response above. **Comment:** Sample-path assumption with known hyperparameters. While this assumption is standard and widely used in the theoretical BO literature, there is also potential hope for relaxed assumptions for local BO. For local BO, we don't need the GP to be a "potentially reasonable" surrogate model of the function everywhere in the input space. Intuitively, we only need sufficient smoothness to be able to estimate the gradient at the current iterate $x_t$; all other areas of the input space are irrelevant at iteration $t$. It is interesting future work to consider what the weakest assumptions are that enable provable GP-based gradient estimation. We plan to explore this possibility further in future work. Nevertheless, we believe that our results already reveal some advantages of local BO algorithms. Our convergence rates (for local stationary points) are **polynomial** both in the dimension $d$ and in the number of samples $n$. This is in sharp contrast to any global BO algorithm, e.g. GP-UCB. In this sense, our results provide partial justification for local BO algorithms and their fast convergence in practice (at least for a particular algorithm, GIBO). **Comment:** In the experiments (Section 6.1), how are the hyperparameters of the bounds or the algorithms set (like $\beta$)? Figure 2 studies the tightness of the bound on the error function (Definition 2), which depends only on the kernel and not on the objective function, and therefore makes no use of $\beta$. The kernel used in Figure 2 is an RBF kernel with unit length scale, as in Theorem 3. The purpose of Figure 2 is to compare how quickly the posterior derivative variance can be decreased empirically against our theoretical upper bound on that rate.
Since the posterior variance doesn't depend on the actual labels collected (with fixed hyperparameters), and since the error function is measured at the origin with only the collected batch of data, the results in Figure 2 are independent of the collected labels or any particular objective function. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Dear authors, Thank you for your response. The response has addressed some of my concerns; however, I still think the assumption that the objective function is a sample from a GP with known hyperparameters is quite strong when analyzing the local BO approach, so I keep my current rating.
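The fact used above, that with fixed hyperparameters the GP posterior variance depends only on the input locations and never on the collected labels, can be seen directly from the posterior formulas. Below is a tiny two-point sketch of ours (not the paper's code) implementing the standard GP posterior mean and variance; the labels $y$ enter the mean only.

```python
import math

def rbf(a, b):
    # RBF kernel with unit length scale, as in Figure 2 of the discussion
    return math.exp(-0.5 * (a - b) ** 2)

def posterior(x_star, X, y, noise=0.01):
    # Two training points, so the 2x2 system can be inverted by hand.
    K = [[rbf(X[i], X[j]) + (noise if i == j else 0.0) for j in range(2)]
         for i in range(2)]
    k = [rbf(x_star, X[i]) for i in range(2)]
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    Kinv = [[K[1][1] / det, -K[0][1] / det], [-K[1][0] / det, K[0][0] / det]]
    alpha = [sum(Kinv[i][j] * y[j] for j in range(2)) for i in range(2)]
    mean = sum(k[i] * alpha[i] for i in range(2))          # uses y
    var = rbf(x_star, x_star) - sum(                        # never uses y
        k[i] * Kinv[i][j] * k[j] for i in range(2) for j in range(2)
    )
    return mean, var

_, v1 = posterior(0.5, [0.0, 1.0], [0.0, 0.0])
_, v2 = posterior(0.5, [0.0, 1.0], [3.0, -2.0])
assert abs(v1 - v2) < 1e-12  # identical variance under different labels
```

The same structure carries over to posterior *derivative* variances, which is why the Figure 2 results are independent of any particular objective function.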
Summary: This paper gives theoretical considerations on local Bayesian optimization methods for black box optimization. In particular, for gradient-based local BO, when the objective function satisfies the smoothness assumption, the rate of convergence to the stationary point is derived for the noiseless and noisy cases, respectively. In the noiseless case, local solutions can be found at a much faster rate with respect to the input dimension d than with conventional global BO. Although the convergence rate is worse in the presence of observation noise than in the noiseless case, it returns to the noiseless rate in the limit of zero noise variance, suggesting that it is a reasonable bound. The tightness of the derived convergence rates is validated by numerical experiments. Strengths: - The rate of convergence of the gradient-based local BO algorithm to the local optimal solution is theoretically derived, and the rate is expected to be somewhat tight (empirically validated). - It is clear how the observational noise affects convergence in the local BO. Furthermore, by taking the limit of the noise variance parameter, the convergence rate connects the noiseless and noisy cases. - The convergence rate results for the noiseless case are given for both cases where the objective function is an element of the RKHS (in the main text) and a sample path of a Gaussian process (in the appendix). Weaknesses: - The local BO discussed here is a gradient method type algorithm called GIBO in particular, and approaches that construct the GP itself locally, such as TurBO [Eriksson+ 2019], have not been considered. - It would be nice to have a theory that connects convergence to a stationary point and regret analysis. 
In Section 3, the results suggest that local solutions are good enough compared to the global solution, and if this is a property that holds to some extent universally, then it seems reasonable to expect that the regrets of local search also have good properties (e.g., sublinear growth). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I think one advantage of local search is that it is compatible with parallel distributed processing (such as TuRBO). Is it possible to extend the gradient-based local BO treated in this paper to parallel distributed processing as well? And if so, would some theory of convergence for such methods be obtained by a direct extension of the results derived in this paper? - Theorem 3 derives the upper bound of the error function for the squared exponential kernel. Can we easily obtain the upper bound of the error function for other kernels (e.g., the Matern kernel)? - In Lemma 2 and line 226, is $E_{d, k, \sigma}(b_t, d)$ a mistake for $E_{d, k, \sigma}(b_t)$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors appear to have adequately addressed the limitations of the study and, where applicable, the potential adverse social consequences. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We provide our response as follows. **Comment:** It would be nice to have a theory that connects convergence to a stationary point and regret analysis. We agree it would be interesting to convert our convergence rates into global regret bounds. However, this is actually a fairly challenging problem that would probably constitute a large technical lift in its own right. We’d need to analyze the regret of finding a sequence of stationary points using random restarts. Essentially, this is a question about the relationship between stationary points of GP sample paths and the global optimum. To the best of our knowledge, there is little research on this question, although some of the theory in Adler (1981) may be relevant. We think this is interesting future work. **Comment:** Is it possible to extend the gradient-based local BO treated in this paper to parallel distributed processing as well? First we note that, while it’s definitely not obvious, search-direction-based BO methods actually do effectively use batch evaluation already! Most of the evaluation budget of methods like GIBO and MPD is spent acquiring batches of data Z to learn about the gradient. Optimizing the acquisition function for Z can be, and often is, done for relatively large (tens to hundreds) batches of points at a time. Thus, even the sequential single-restart version of GIBO makes reasonably effective use of parallel evaluation capabilities. If additional parallelization is required, considering multiple restarts simultaneously is possible. **Comment:** Upper bound of the error function for other kernels. We now have an additional result on the Matern kernel ($\nu = 2.5$). Interestingly, we are able to prove that the Matern kernel has the same rate as the RBF kernel, i.e., $E_{d, k, \sigma}(b) = \mathcal{O}(\sigma d^{\frac32}b^{-\frac12})$. This new proof slightly adapts the techniques in the paper.
The main new technique is to use a rational-function upper bound on the posterior covariance trace for the Matern kernel, which allows us to analyze the error function by minimizing this rational-function upper bound. We conjecture that our proof can be generalized to the **entire** Matern family (widely used in practice) and the rational quadratic kernel. Additionally, we remark that the error function we proposed in this paper is a quantity analogous to the information gain in global BO analysis — bounds on the error function for **any** kernel immediately yield convergence rates via our proof. **Comment:** In Lemma 2 and Line 226… Correct. Indeed, the dependency on the dimension is through the subscript, not the argument. Thanks for spotting this typo. --- Rebuttal Comment 1.1: Title: Response to the authors' comments Comment: Thank you very much for your detailed answers to my questions. I guess I did not understand enough about parallel and distributed processing, and the authors' comments convinced me. It is also very interesting to see the possibility of extending the error function bounds to important types of kernels. If this conjecture is true, the results of this paper would be even stronger. Taking into account other review comments and responses, I will raise the score by one.
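For context on the gradient-based local BO discussed in this thread, here is a minimal sketch of estimating a search direction from the gradient of the GP posterior mean (a simplified stand-in for GIBO-style methods, not the paper's implementation; the toy 1-D objective, the grid batch `Z`, and all hyperparameters are illustrative assumptions). For the RBF kernel the posterior mean gradient is available in closed form.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# Toy objective; in GIBO the batch Z would be chosen by an acquisition
# rule to reduce gradient uncertainty, here it is simply a grid.
f = lambda x: 3.0 * x[:, 0]
Z = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = f(Z)

noise, ell = 1e-2, 1.0
K = rbf(Z, Z, ell) + noise ** 2 * np.eye(len(Z))
alpha = np.linalg.solve(K, y)

# Gradient of the posterior mean at x0: sum_i d/dx k(x0, z_i) * alpha_i,
# with d/dx k(x, z) = -(x - z) / ell^2 * k(x, z) for the RBF kernel.
x0 = np.zeros((1, 1))
dk = -(x0 - Z) / ell ** 2 * rbf(x0, Z, ell).T   # shape (50, 1)
grad_est = (dk * alpha[:, None]).sum(axis=0)    # search direction estimate
```

With a well-sampled smooth objective the estimate recovers the true gradient (here 3) closely; the local step would then move against this direction.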
Rebuttal 1: Rebuttal: We thank all reviewers for their encouraging comments and helpful feedback on our submission. All reviewers share a common question of whether and how our theory might be extended beyond descent-direction approaches like GIBO to trust-region algorithms like TuRBO. In general we agree it would be very interesting to analyze TuRBO theoretically, and we’re interested in investigating this as well. While at this point we think it’s probably possible, the analysis is somewhat more involved than for GIBO. In particular, we believe TuRBO must be modified more significantly than GIBO in order to analyze it; for example, the rules for shrinking and expanding the trust region will likely differ from simply “grow after k successes, shrink after k failures.” We will revise the relevant sentences in the paper to make clear that we have only proved convergence rates for search-direction-style local BO methods, and list the convergence of TuRBO as important future work.
NeurIPS_2023_submissions_huggingface
2023
Provable Advantage of Curriculum Learning on Parity Targets with Mixed Inputs
Accept (poster)
Summary: This paper presents a theoretical analysis of curriculum learning, in which a neural network first presented with “easier” examples is able to more efficiently learn a target over more “complex” examples. The authors specifically consider the problem of learning $k$-sparse parities over a distribution which is a mixture of the uniform distribution over the hypercube and a biased distribution over the hypercube (the easy examples). The main positive results study a two-layer neural network trained via a layer-wise curriculum version of SGD (Algorithm 1), where first the first-layer weights are trained on samples from the biased distribution, and next the second-layer weights are trained on samples from the mixture distribution. Theorem 3 shows that this algorithm learns $k$-sparse parities with respect to the mixture distribution in $\tilde O(d)$ steps of SGD. In contrast, the paper presents a lower bound showing that noisy SGD on the mixture distribution alone requires $\Omega(d^{1+\delta})$ timesteps to learn a $k$-sparse parity up to nontrivial error. The proof for the upper bound relies on a martingale-plus-drift argument to show that the $k$ relevant coordinates have been learned after the first stage, while the lower bound uses techniques similar to those for proving SQ lower bounds. Strengths: - The paper is well written. - I skimmed the proofs in the appendix and they appear to be sound. The experiments section demonstrates that the proposed curriculum strategy is effective in settings beyond those considered in the theory. - Curriculum learning is a common procedure in practice, and yet there is limited theoretical understanding of its behavior. The sparse parity problem presented here is an interesting setting to study curriculum learning, with a clear measure of “easier” samples. The results here adequately show that a neural network can obtain an efficiency improvement from first training on these easier samples, which I find to be a very nice result.
Weaknesses: - The novelty of the paper, in comparison to the prior work [CM23], is a bit difficult to discern. It appears that the main difference is that the current paper uses SGD for multiple steps in the first stage and uses the “drift plus martingale” technique for the proof, while [CM23] considers a single large step of GD with large batch size in the first stage and shows that this is sufficient for learning the support. However, Theorem 3 here seems very similar to Theorem 3.4 in [CM23]. Also, Theorem 4 still uses the one-step algorithm. Could you please comment further on the novelty of the current paper in comparison to the prior work? - A minor weakness is that the algorithm is a bit restrictive and has some nonstandard modifications. In particular, during the first stage the biases are chosen to be very large ($\Theta(d)$), and then resampled to different deterministic values before the second stage. Next, there is an $\ell_\infty$ projection on the weights during each step of stage 1. Could you please comment on the necessity of these modifications? I think the paper could benefit from additional discussion of these modifications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It appears that in Theorem 3, the algorithm receives far more samples from the biased distribution than the uniform distribution. In particular, during the first stage the algorithm receives $\tilde O(d)$ biased samples, while in the second stage it receives $\tilde O(d)$ uniform samples. In contrast, when running SGD on the entire mixture distribution (like for the lower bounds in Theorem 5), the algorithm receives only $\rho$-times as many biased samples as uniform samples. In the regimes where there is a separation, $\rho$ is very small, in which case curriculum SGD is receiving far more samples from $D_\mu$.
It thus seems like the success of curriculum SGD is not necessarily due to viewing the “easy” examples first, but rather from having access to far more “easy” examples than in regular SGD. Is my understanding here correct, and can you please comment on this further? I am open to increasing my score after further clarifications on the above point and my concerns in the weaknesses section. - Minor typos: In Algorithm 1, should line 3 be $\frac12 - \frac{\mu}{4}$ instead? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
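The data model described in this review can be made concrete with a small sampler (a sketch under the stated definitions, not the paper's code; the values of `d`, `k`, `rho`, `mu` and placing the parity support on the first $k$ coordinates are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 30, 5            # ambient dimension and parity degree (illustrative)
rho, mu = 0.2, 0.9      # mixture weight and bias of the "easy" component

def sample_mixture(n):
    # With prob rho draw from the mu-biased product distribution on {-1,1}^d
    # (each coordinate is +1 with prob (1+mu)/2, so easy inputs have few -1's),
    # otherwise draw from the uniform distribution on the hypercube.
    easy = rng.random(n) < rho
    p = np.where(easy[:, None], (1 + mu) / 2, 0.5)
    X = np.where(rng.random((n, d)) < p, 1, -1)
    y = X[:, :k].prod(axis=1)   # k-sparse parity on the first k coordinates
    return X, y, easy

X, y, easy = sample_mixture(20000)
```

Curriculum training would order the `easy` rows first; standard training would stream the mixture as drawn.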
Rebuttal 1: Rebuttal: Please see the global response for the differences with the previous work [CM23]. - Q1: A minor weakness is that the algorithm is a bit restrictive and has some nonstandard modifications. In particular, during the first stage the biases are chosen to be very large ($\Theta(d)$), and then resampled to different deterministic values before the second stage. Next, there is an $\ell_\infty$ projection on the weights during each step of stage 1. Could you please comment on the necessity of these modifications? I think the paper could benefit from additional discussion of these modifications. A1: Biases: We consider fixed bias parameters for simplicity of the theoretical analysis. Regarding the specific values that we chose, we initialized them to large constants in order to have simple expressions for the gradients of the network during the first part of training (eq. (25)). After the first part of training, we update the biases so that they are well spread in the interval $[-\Delta k, \Delta k]$ and the target parity belongs to the linear span of the hidden units at time $T_1$. $\ell_\infty$ projection: We added an $\ell_\infty$ projection to prevent weights from diverging. This projection is needed because the training process doesn't occur simultaneously for both layers. As a result, the training doesn't stop once the network fits the data. We will add a discussion of the above in the paper. - Q2: It appears that in Theorem 3, the algorithm receives far more samples from the biased distribution than the uniform distribution. In particular, during the first stage the algorithm receives $\Theta(d)$ biased samples, while in the second stage it receives $\Theta(d)$ uniform samples. In contrast, when running SGD on the entire mixture distribution the algorithm receives only $\rho$-times as many biased samples as uniform samples.
It thus seems like the success of curriculum SGD is not necessarily due to viewing the “easy” examples first, but rather from having access to far more “easy” examples than in regular SGD. A2: It is true that our current theoretical results cannot distinguish between having more easy samples or fewer. Indeed, although we have proved an upper bound on the number of samples needed, we do not have any lower bound on the sample complexity. We have tried to explain this in Remark 4. Nonetheless, in the sample complexity experiments (Figure 1) we use the exact same data and only change the ordering for curriculum and standard training. - Q3: Minor typos: In Algorithm 1, should line 3 be $1/2-\mu/4$ instead? A3: Yes, thank you for pointing out the typo. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you to the authors for their response. The contribution of the current submission in comparison to [CM23] is now more clear to me, and so I have increased my score accordingly. I think it is still interesting to understand whether the benefit of curriculum learning shown here is really due to the order of the examples, or rather just having access to more easy samples. --- Reply to Comment 1.1.1: Comment: Thank you for considering the rebuttal and increasing your score. We agree that it would be interesting to prove the benefits of the proposed curriculum strategy regarding sample complexity as well. (We emphasize that the sample complexity experiments already show this empirically.)
Summary: The authors present a separation result between curriculum learning and standard learning in the number of noisy-(S)GD training steps for a 2-layer ReLU network for the case of learning noiseless $k$-parity. While the precise training algorithm deviates from the common deep learning setup, experiments verify that learning $k$-parity can benefit from learning sparse inputs before harder dense inputs - mostly in terms of an improved sample complexity. Strengths: - Theoretical evidence for benefits of curriculum learning. - Additional positive results for the hinge loss. - Bound on sample complexity for the positive case. - The negative result allows for a comparison with the positive one in some settings. - The experimental results suggest a strong difference between curriculum learning and standard learning (yet, not exactly in the theoretical setup so far) in the context of learning a parity function. Weaknesses: - The analysis is limited to a single and not very general learning problem (learning $k$-parity). - The analysis makes assumptions that are quite different from practical deep learning setups: 1) Non-standard neural network initialization. 2) Non-standard loss (covariance loss) that penalizes errors on samples with true negative labels higher than errors on truly positive samples. The hinge loss results rely on an uncommon activation function. 3) The hinge loss results are relatively weak as they just state the existence of an initialization etc. that leads to successful training. Essentially, this is not stronger than an expressiveness result, because the initialization could just correspond to the optimal (potentially learned) parameters. 4) No noise in the data. 5) Curriculum learning and dense learning use learning algorithms that are different also in other aspects than just the order of samples. For instance, the batch size is required to be quite large in case of dense learning.
Most importantly, the theorem for curriculum learning assumes a much larger proportion of easy examples $\rho$ than the standard learning. This is important because $\rho$ essentially controls how easy the learning problem is. In the experiments, for large enough $\rho$ dense learning seems to perform better than curriculum learning. (Btw, this observation is not explained by the theory.) 6) The learning algorithm is not standard SGD or GD but learns the layers separately. 7) The bias parameters are not trained (and it is left unclear in the main paper how they are initialized). - Mismatch between experimental validation and theory: 1) different loss (l2 in experiments, mostly covariance loss in theory) 2) k = 5 in most experiments, while theory assumes k >=6 for separation result. 3) MLP with four layers instead of 2. Experiments should also validate the theory and not only study fairly unrelated cases where we can make similar observations. The direct validation would be important to understand how relevant the constants in the theoretical results are, for instance. - The theorem statements are hard to understand because they rely on notation that has not been introduced in the main text. (Examples: $\mu^{k-1}$ or $\Delta$ or P.) - The proof techniques do not appear to be very novel but similar to previous work. - Missing information: How is the generalisation accuracy computed? I assume samples were drawn from the mixed data distribution, which depends on $k$ and $\rho$? How many samples were used in the experiments? This is important because it also influences how hard the learning problem is. - The theoretical and experimental results study the performance with respect to a test distribution that depends on the amount of easy versus hard examples. This is suboptimal because it can therefore make no claims about the question what kind of data would benefit learning the actual parity function (on all possible inputs). 
In contrast, the central question underlying curriculum learning is what kind of data (and which order) is best for learning a task. **Points of minor critique:** - Literature discussion: Given the literature discussion, the study of the specific problem of learning parities in the context of curriculum learning does not seem to be in its infancy, as claimed. Furthermore, [SMS22] does not really make the difficulty level criterion label dependent but dependent on the (irrelevant features). In some sense, the same holds also true for the submitted work, as the definition depends on the negative coordinates that also inform the label. (Yet, this does not change the fact that [SMS22] and the presented work are sufficiently different.) - More elements of the notation could be explained. For instance, Rad could be introduced as Rademacher distribution; Unit as uniform distribution; neural networks could be defined formally earlier, []_A could be defined mathematically, etc. The bias scale was not defined. How are the biases initialized? The size P of a neural network has not been defined precisely. Does it refer to the number of hidden neurons or number of parameters? Weak learning has not been defined/explained. - The set of sparse inputs does not have size $\rho$ on average according to the definition of $X_1$ on page 3 (different from how it is frequently discussed). Also $D_u$ contributes to $X_1$, while $D_{\mu}$ does not always. Later, $d$ is assumed to be large enough that the statement is approximately correct, but that comes a bit late (and is not very precise). - Only even values of $\mu$ have been studied in the experiments. Uneven averages should also be considered. - Not all curves in Figure 2 (middle and right) are visible. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - To which degree does the performance actually depend on being exposed to the hard samples? 
Curriculum learning is based on the hypothesis that it is not only beneficial to learn from easy examples early but that learning from hard samples later also boosts the performance. I am aware that the negative result needs $\rho$ to be small. Yet, I suspect that learning only on "easy" examples would already be sufficient to learn a parity function because the easy examples carry all relevant information about the parity function. To analyze this question in more detail, it would be good to 1) make the generalisation accuracy independent of the mixed distribution and test whether the correct target function has been learned; 2) report how the performance changes during training. - Figure 2 (middle) implies a non-monotonic dependence of the accuracy on $\mu$. It would be good to study this in more detail and show a figure with $\mu$ on the x-axis. What could be an explanation? Does it matter whether $d\mu$ is even or uneven? - To which degree are the experimental results dependent on hyperparameter tuning? Were standard learning and curriculum learning tuned independently? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations have been pointed out under weaknesses and were mostly discussed by the authors. I do not foresee a major or immediate societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Q1. What about training only on easy samples? A1. Note that neural networks do not know that the target function is a parity function a priori; thus, they cannot recover the parity function only from the sparse data. (We agree that if the learner knew that the target is a $k$-parity it would have been able to recover it only from the sparse samples, as identifying the support would have been sufficient.) Indeed, we observed that a mismatch between the training distribution and test distribution would result in worse generalization and that training neural networks on sparse samples would cause the neural network to learn other lower-degree solutions rather than the actual parity function. - Q2. Non-monotonic dependence of the accuracy on $\mu$. A2. We note that this non-monotonic dependence is indeed reflected in our theoretical results as well. Particularly, the results in Theorem 3 depend on the quantity $L_\mu = \mu^{k-1} - \mu^{k+1}$, which is maximized near $\mu = 1$ but does not increase monotonically with $\mu$. Due to the challenges of defining sample complexity properly, we would rather not plot the sample complexity with $\mu$ on the x-axis. Please also see Figure 4, which also shows the dependency on $\mu$. Finally, we remark that $d\mu$ is a real-valued and continuous quantity; we thus do not expect any different behavior regarding evenness. We will add more context regarding the dependency of the sample complexity on $\mu$. - Q3. Hyperparameter tuning. A3. We tried different values for the hyperparameters (mainly batch size and learning rate) to ensure the robustness of our results and we did not observe any significant changes in the potential gains of the curriculum method. For the experiments presented in the paper, we selected hyperparameters based on the speed and stability of the convergence of the standard training. (We tried to use values that are common in practice as well, e.g., a batch size of 64).
To ensure fairness, we used the values that we picked for standard training for the curriculum method as well. We have explained our hyperparameter tuning in Appendix E.1.2. - Q4. Limitation to parity targets. A4. Indeed, the focus of this paper is on parity targets. However, we experimentally investigated the proposed curriculum method for Boolean functions other than parity targets (see the last part of Section 5). Our experiments suggest that the proposed curriculum method can still be beneficial for learning or weakly learning some Boolean functions, but a deeper understanding of those is left to future work. However, we have also shown that there are functions for which the curriculum is not helpful. - Q5: Different assumptions from practical deep learning setups. A5: For the positive result, we have made some simplifying assumptions on the training settings so that the theoretical analysis becomes feasible. Note that some of these assumptions, such as layer-wise training, are quite common in the theory of deep learning. On the other hand, our negative result holds for all fully connected architectures of any depth, and with any activation and initialization that is invariant to permutations of the input neurons. Also, in our experiments we tried to use common deep learning settings. - Q6: Hinge loss result. A6: In Theorem 4, we prove that there exists an initialization such that, under appropriate hypotheses, SGD can successfully learn *any* $k$-parity. Thus, this is not just expressiveness, as one does not know which $k$-parity is the correct target and our initialization is agnostic of the target function. We will clarify the statement of the theorem to avoid this confusion and we thank the reviewer for pointing this issue out. - Q7. Curriculum learning and dense learning use learning algorithms that are different also in other aspects than just the order of samples. A7: We addressed this question in the general rebuttal.
- Q8: Mismatch between experimental validation and theory. A8: Our experiments cover more general and common settings than the theoretical part. (As pointed out by the reviewer, some of our theoretical assumptions are not common in practice; for this very reason we focused on common training settings in the experiments.) Nonetheless, we emphasize that this is an extension of our theoretical results and not a mismatch. Additionally, we have also covered the setting of our theoretical results in the appendix. Particularly, Figures 5, 6, and 7 cover a two-layer neural network (with mean-field parametrization), the hinge loss, and the covariance loss. We have also used parities beyond degree 5 in Figures 2 and 4. - Q9. How is the generalisation accuracy computed? Number of training samples? A9. The generalization accuracy is computed based on $D_{mix}$. In Q1 we explain the need for using the same distribution for training and test. The number of training samples is shown on the x-axis for experiments regarding sample complexity. For other experiments that work with fresh samples and a number of iterations, the number of training samples is given by the batch size (almost always 64) times the number of iterations. However, we once again emphasize that samples are fresh and only seen once. - Q10. What kind of data is most useful? A10. Note that for each single experiment (or theoretical result) the training (=test) distribution is shared between curriculum and standard training to make the comparisons fair. In our setting, we need sparse data to start learning the function (detecting the latent dimensions) and we also need matching train and test distributions, so sparse samples alone are not enough (see Q1 as well). This indeed shows why such a curriculum is helpful. - Q11. Points of minor critique A11. We thank the reviewer for these points. We will revise the text to clarify the points raised.
We will clarify the notation and how the biases of Thm 3 are initialized in the main text. --- Rebuttal Comment 1.1: Title: Score update Comment: I thank the authors for the clarifications and for pointing me to additional experiments in the appendix. I have increased my score to a weak accept in response. --- Reply to Comment 1.1.1: Comment: Thank you for considering the rebuttal and increasing your rating.
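The non-monotonic dependence on $\mu$ discussed in A2 of the rebuttal above can be checked numerically: setting the derivative of $L_\mu = \mu^{k-1} - \mu^{k+1}$ to zero gives the maximizer $\mu^* = \sqrt{(k-1)/(k+1)}$, which approaches 1 as $k$ grows but lies strictly inside $(0, 1)$ (a quick sanity check, not from the paper; $k = 5$ is illustrative).

```python
import numpy as np

k = 5                                  # parity degree (illustrative)
mu = np.linspace(0.0, 1.0, 100001)
L = mu ** (k - 1) - mu ** (k + 1)      # the quantity L_mu from Theorem 3

# dL/dmu = (k-1) mu^{k-2} - (k+1) mu^k = 0 yields the closed-form maximizer:
mu_star = ((k - 1) / (k + 1)) ** 0.5   # ~0.816 for k = 5, i.e. "near mu = 1"

mu_hat = mu[np.argmax(L)]              # grid-search maximizer
```

Since $L_\mu$ vanishes at both endpoints and peaks in between, accuracy as a function of $\mu$ need not be monotonic, consistent with Figure 2 (middle).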
Summary: This paper presents provable results showing the efficiency of curriculum learning for a specific problem setting with training data $(x,y)$, where $x\in\{\pm1\}^d$ is drawn from a mixture distribution and $y=\prod_{j\in\mathcal{S}} x_j\in\{\pm1\}$. What's more, the study utilizes a 2-layer fully connected network and the noisy-SGD training method. Strengths: 1. This paper clearly proves the advantages of curriculum learning over standard learning by analyzing a well-defined problem. 2. Numerous results are also provided and highlight the reduced sample and iteration requirements of curriculum learning. Weaknesses: Please see Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. This paper introduces a highly specific data setting. How can this setting be applied to real-world applications or more general settings? 2. Is there a particular significance to the notion of "sparsity"? I am curious about the evidence if the model was trained on dense data instead of sparse data initially. 3. In Theorem 3, why is $T_1$ independent of the first-layer model size $Nd$, while $T_2$ is much larger than the second-layer size $N$? Additionally, is the result evaluated using training accuracy? 4. In my opinion, a more appropriate point of comparison for standard training would be: **randomly** sample an equivalent dataset size (as in curriculum learning) to train the first layer, followed by training the second layer using the entire dataset. Since the first layer contains significantly more parameters, standard training might introduce overfitting due to the layer-wise training method rather than curriculum learning. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Q1. This paper introduces a highly specific data setting. How can this setting be applied to real-world applications or more general settings? A1. In our opinion, the closest curriculum method in the real world is the use of input length for NLP and reasoning tasks. Note that $+1$ is the identity element in the Boolean setting and thus can also be viewed as a padding element. Consequently, the number of $-1$’s can be seen as the length of the Boolean inputs (see related literature). We also expect similar results to be found in the learning of functions with Gaussian inputs. However, the goal of this paper is to prove the benefit of curriculum learning for a problem (with the same task on the same data distribution) where theoretical analysis is possible, and that is why we have focused on parity functions, for which we have been able to prove a separation result. - Q2. Is there a particular significance to the notion of "sparsity"? What about training the model on the dense data initially? A2. On sparse data (i.e., inputs with few negative bits), parities on different supports are correlated. In particular, $\mathbb{E}_x[\chi_S(x)\chi_T(x)] = \rho \mu^{|S \Delta T|}$ assuming $S\neq T$ ($\Delta$ stands for symmetric set difference, i.e., union minus intersection). This allows SGD to easily identify the relevant coordinates, and a subsequent fit on the whole dataset allows learning the correct function. We remark that the same holds if we defined sparse inputs as those with few $+1$ bits, and mostly $-1$ bits (having negative $\mu$ – note that the absolute value of the quantity above does not change). On dense data (i.e., inputs with roughly half negative bits), parities on different supports are not correlated ($\mathbb{E}[\chi_S(x)\chi_T(x)] \approx 0$), which makes identifying the support challenging for any progressive learner (including SGD).
For this reason, training initially on dense inputs only does not help identify the support of the parity, thus we expect such a curriculum not to be beneficial for learning parities. - Q3. In Theorem 3, why is $T_1$ independent of the first layer model size $Nd$, while $T_2$ is much larger than the second layer size? Additionally, is the result evaluated using training accuracy? A3. The model size, and more specifically the number of hidden units, has to guarantee that the network can represent any $k$-parity. During the first layer training the network identifies the set of relevant coordinates, and $T_1$ does not depend on the number of parameters (note that at each step of training all weights in the first layer move). For the training of the second layer, we used standard results on convergence of SGD on convex losses. $T_2$ depends on $N$; however, it is larger than that since we took a small learning rate ($\gamma_2 \approx 1/ Nd$). The result is evaluated using the training loss (covariance loss), which in our case can be related to the training accuracy (see Proposition 1). Note that in the experiments (other than the ones in Figure 3), we directly report accuracy. - Q4. Using first layer then second layer training for both the curriculum and standard training; since the first layer contains more parameters, standard training might introduce overfitting due to the layer-wise training method rather than curriculum learning. A4. We remark that the benefits of the curriculum algorithm are not due to the layer-wise training. First note that in the experiments we train all the layers jointly for both the curriculum and standard training. For the theoretical results, the negative result for standard training (Theorem 5) is also valid whether the training is done layer-wise or jointly. We will clarify this issue in the paper as well. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their thoughtful response and clarification.
After considering the other reviews, I have decided to maintain my original rating. --- Reply to Comment 1.1.1: Comment: Thank you for reading our rebuttal and acknowledging it.
Summary: This paper studies the impact of curriculum learning for the family of parity functions. Compared to previous work, whose settings have been distanced from realistic ones, e.g., considering an uncommon activation function with a particular initialization, or an unbounded learning rate, this paper resolves all of those issues, although still in the special case of the covariance loss. The authors successfully show how the order of samples given to the model affects the sample complexity, both in theory and in experiments. Besides, addressing the downside of the standard SGD algorithm that samples are completely shuffled, they provide quite tight upper bounds on the final accuracy of a general fully connected neural network with limited total size, when assuming the procedure is noisy so as to resemble the SQ-learning setting and use similar proofs. Strengths: 1- The authors' suggested setting is the closest to reality so far. Unlike previous work, they use a bounded learning rate throughout the training process, whereas the former algorithm proceeded by using only one step of SGD with a learning rate that can get arbitrarily large. 2- The combination of a lower bound on test accuracy when using curriculum learning (giving correlated samples at first so that the model approximately learns the support) and an upper bound in the case of standard random-batch SGD illustrates the advantages of the proposed algorithm. 3. The experiments are in full agreement with the theoretical results, and various settings are implemented: comparison of the number of samples, dependence on training steps, and different parameters. Weaknesses: 1- The improvements over previous work are not very conspicuous. Essentially, there is no new point in the paper beyond presenting former arguments in a more rigorous manner. 2- Additionally, they do not give any assuring bounds beyond the two-layer neural network, which is the exact model their reference work considered.
3- There are many non-free hyperparameters in their training process. The assumption that the number of training steps for the first part of the algorithm ($T_1$) depends on the parity size is questionable. This may be the reason why previous work assumed only one step of SGD on sparse data. 4- When attempting to derive similar bounds beyond the covariance loss, which is not common among experimentalists, they only consider the hinge loss. And this causes their advantage over previous work to disappear, because the learning rate becomes unbounded. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1- Would you please clarify the new points of your paper? It seems that your reference [CM23] has covered most of the claims in your paper. 2- I suggest including other tasks for which curriculum would turn out to be helpful in your work. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The main limitation of this paper is that it does not go anywhere other than the parity task. Many papers have investigated the same object before. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Please see the global response for the differences with the previous work [CM23]. We address the rest of the remarks and questions in the list below. - Q1. Does $T_1$ depend on $k$? A1. $T_1$ depends on $\mu^k$, through the parameter $L_{\mu}$. This dependence appears in the 1-step arguments as well, and we believe that it cannot be removed. However, we remark that choosing $\mu$ sufficiently close to 1 (e.g. $\mu =1-\frac{1}{d}$ or $\mu =1-\frac{1}{k}$) allows bounding $\mu^k$ independently of $k$. - Q2. The hinge loss learning rate is unbounded. A2. We believe that Theorem 3 could be extended to include the setting with hinge loss. This however would require a more complicated proof (e.g. by adding a second $l_\infty$ projection to the iterations of SGD). - Q3. How does the proposed curriculum method generalize to other tasks? A3. We empirically investigated the suggested curriculum method for Boolean functions other than parity targets (see the last part of Section 5). We have shown that the suggested curriculum method can still be beneficial for learning or weak learning of some Boolean functions. However, we have also shown that there are functions for which the proposed curriculum is not helpful. Based on these experiments, we put forward the conjecture that curriculum can be beneficial in learning the lowest-degree monomials of a function and therefore (based on the structure of the function) it might be helpful or not. Beyond Boolean functions, we believe similar observations can be found for functions on Gaussian inputs. We believe the most analogous real-world application of our setup is the use of input length in NLP/reasoning tasks (note that here $+1$ is the identity element, so it can also be viewed as the padding element, and the number of $-1$’s can be seen as length – see related literature). --- Rebuttal Comment 1.1: Comment: Thanks for the clarification on the results of this paper.
Having reviewed the contributions of this paper and the rebuttal by the authors, I decided to improve my rating and give this a weak accept instead. --- Reply to Comment 1.1.1: Comment: Thank you for considering the rebuttal and increasing your score.
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments. We address the remarks and questions in the lists below. ## Comparison to [CM23] Our result is not only an improvement of [CM23] in terms of having more natural training settings and hyperparameters (i.e., we use SGD with bounded batch size and bounded learning rate in Theorem 3); it also establishes a separation between learning with and without curriculum on a common sampling distribution, while [CM23] does not. More specifically: 1. We prove a separation between curriculum training and standard training on a common dataset (sampled from $D_{mix}$), in contrast to [CM23], where curriculum training involves both sparse and dense inputs and standard training involves dense inputs only. To the best of our knowledge, our paper is the first work that establishes a rigorous separation between training on adequately ordered samples and randomly ordered samples drawn from the same distribution, and our theorem gives a condition on the mixture under which this separation holds (interestingly, the separation does not take place for all mixture parameters). 2. We prove a positive result for layer-wise curriculum SGD, with or without noise, with bounded batch size and bounded learning rate (Theorem 3). This involves separating the dynamics into a drift term and a martingale term and adequately bounding each contribution. While such drift-martingale techniques have appeared recently in previous works for spherical and Gaussian inputs, to the best of our knowledge, none of the references mentioned in the paper consider Boolean inputs and parity targets. In contrast, [CM23] makes a 1-step argument with an unconventionally large learning rate to show that correlations with the support can be obtained in the hidden layer, without tackling the full dynamics. 3.
The martingale proof technique allows us to obtain a tighter upper bound on the number of samples needed to learn with curriculum, compared to the 1-step argument used in [CM23]. In particular, in Theorem 3 we prove that with curriculum we can learn with $\tilde O(d/\rho)$ samples, while applying the same proof technique of [CM23, Theorem 4] yields an upper bound of only $\tilde O(d^2/\rho)$ samples. 4. We prove a non-trivial negative bound for standard training on inputs sampled from $D_{mix}$, while the negative part of the separation of [CM23] relies on previously established lower bounds for learning parities on uniform inputs. We emphasize that our lower bound holds for any fully connected architecture of any depth, with any activation, any learning rate schedule (e.g., layer-wise) and permutation-invariant initialization. ## Further comments on curriculum and standard training in the theoretical results (Reviewer 8Apd) - Q7: Difference between curriculum and standard learning. A7: Batch size: indeed, our negative result holds only for noisy-SGD with large batch size. We are not aware of any technique for proving lower bounds for SGD with small batch size. Our positive result holds for the same setting as our negative result AND for other settings as well, such as SGD without noise and any batch size. Thus, our theoretical separation holds for noisy-SGD with large batch size. On the other hand, our experiments use SGD with standard batch size for both curriculum and standard training. Proportion of easy examples: our separation holds for small $\rho$, thus for datasets with a small fraction of easy examples. In Theorem 3, we assume the number of easy examples to be at least $\tilde{\Theta}(d)$, which implies that our dataset must be of size at least $\tilde{\Theta}(d/\rho)$.
We remark that we do not assume a large proportion of easy examples; we assume instead a large enough dataset (indeed, we prove an upper bound on the number of hard samples needed, but the algorithm can use more). Theoretical results for large $\rho$: indeed, our experiments surprisingly show that curriculum can be harmful for datasets with many easy samples (at least in terms of the number of training steps needed to learn). This case is indeed not covered by our theoretical results. While we do not have a negative result for curriculum training, Theorem 3 gives, in the setting considered, an upper bound on the number of training steps needed to learn with curriculum that does not depend on $\rho$. On the other hand, if we try to follow the footsteps of the proof of Theorem 3 for standard training, we would get an upper bound that decreases with $\rho$ (it would depend on the expectation of a $k$-parity under $D_{mix}$). We do not expect rigorous arguments for such bounds to be easy extensions of our work.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Bayesian Kernelized Tensor Factorization as Surrogate for Bayesian Optimization
Reject
Summary: This paper proposes a novel surrogate for Bayesian optimization (BO), Bayesian kernelized tensor factorization (BKTF). The authors claim that BKTF is able to model more complex functions (nonstationary, nonseparable) compared to additive- and product-kernel Gaussian processes. For inference, they leverage Gibbs sampling to do full Bayesian inference. They compare against BO with the regular Gaussian process surrogate and tree-structured Parzen estimators. Strengths: * The paper proposes a novel surrogate model, BKTF, for BO. I believe this is new, and as long as the authors can demonstrate the utility of BKTF, this will be a valuable contribution to the BO community. Weaknesses: * The proposed strategy uses Gibbs sampling, which is well known to scale poorly with the number of parameters and with correlations. Thus, I am concerned that this method's performance will fare poorly at even moderately higher dimensions and numbers of observations than those considered in this paper. * On a similar note, unless I'm mistaken, the method requires inferring the latent functions (or bases) $g_d^D$. This means inference needs to be performed at every BO step. This contrasts with GPs, where, even if one decides to do fully Bayesian inference, one does not need to run MCMC at every step. Thus, the method comes with a reduction in flexibility. If the authors believe that their method can work with less expensive inference strategies, say, VI or MAP, then this should be demonstrated and evaluated. * The paper claims that the experiments are "extensive" (line 73), but unfortunately, I find that the experiments conducted in this paper cannot be considered extensive by today's standards. See for example [1,2], which I would consider extensive. Furthermore, at this small scale / low budget, noise can very easily swamp the effects. Therefore, I would expect a lot more runs. Moreover, the hyperparameter tuning experiments in Section 5.2 are not reflective of real-world use cases.
So these are again inadequate to evaluate the real-world performance of BKTF. * Furthermore, the baselines are not enough. The research space for alternatives to BO surrogates has certainly been active, but here only the tree-structured Parzen estimator is considered. In fact, the paper mentions that BKTF here corresponds to a two-layer deep GP. Then, they should compare against deep GPs for an apples-to-apples comparison. The computational costs/scalability of DGPs would probably be comparable, so this would be a more appropriate comparison. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: ### Additional Major Comments * The paper repeatedly uses the term "separable" to characterize the SE-ARD kernel. I have never seen "separable" used in a context other than "additively separable." Multiplicative kernels model the correlations between dimensions, thus I'm not sure what BKTF is doing more than SE-ARD. * Line 49-50: The paper claims that deep GPs require a large number of samples. I presume this statement is relative to BKTF, but I do not find evidence in this paper that BKTF requires fewer samples. * Line 251: What does "low-rank" mean in this context? * The paper does not cite the original source when referring to existing methods. For example: * Line 56: CANDECOMP, PARAFAC * Line 69: Slice sampling * Line 117: I think the GP-UCB paper [3] should be cited here. * Line 115: $\beta$ in GP-UCB is not necessarily a tunable parameter in the sense that the optimal configuration, under assumptions, is very specific. See [3]. * The paper applies the UCB acquisition function to their method, but UCB is known to be conservative, and not that competitive among acquisition functions. Thus, I recommend using other acquisition functions. * Section 4: Considering that this paper proposes an alternative surrogate, the related work section should put the proposed method in the context of alternative surrogates.
Unfortunately, the current form mostly discusses kernel factorization, which I think is less useful for the BO community. After all, the papers on BKTF referenced herein could be consulted for that context. ### Minor Comments * Above Line 158: kerenl $\to$ kernel. * Between Line 87 and 88: Normally, we say that the expectation of the noisy version of $f$, for example $\hat{f} = f + \epsilon$, is minimized, rather than saying that we optimize $f$ directly. ### References I am not affiliated with any of the papers and authors mentioned in this review. 1. [1] Bodin, Erik, et al. "Modulating surrogates for Bayesian optimization." International Conference on Machine Learning. PMLR, 2020. 2. [2] Malkomes, Gustavo, and Roman Garnett. "Automating Bayesian optimization with Bayesian optimization." Advances in Neural Information Processing Systems 31 (2018). 3. [3] Srinivas, Niranjan, et al. "Information-theoretic regret bounds for Gaussian process optimization in the bandit setting." IEEE Transactions on Information Theory 58.5 (2012): 3250-3265. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 3 good Limitations: Yes, in Section 6. However, I think the limitations I've discussed above could also be included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your time and thank you for your review and your acknowledgment of the contribution of this model. Through the review, however, we believe there is some misunderstanding and misinterpretation of the paper. In the response below, we will try to explain and clarify these points as much as we can. $\textbf{for Weaknesses:}$ 1. We have discussed the computational cost of BKTF with respect to model inference and AF computation in the response to all. We have to say that Gibbs sampling is not the reason for the scalability issue; instead, we consider MCMC the best way to achieve uncertainty quantification for complex models. The main computational bottleneck is the grid enumeration-based AF computation; we have considered randomly selecting candidate points for the AF as an alternative solution for higher-dimensional problems and tested it on benchmark functions. We have also tested on a higher-dimensional 10D Griewank function in the experiments; see Table A and Figure A in the PDF. We think the newly added experiments can address the doubts about scalability in data dimension and number of observation points. We will also add the average running time per function evaluation later. As can be seen, BKTF consistently obtains the best performance on all tested functions, which demonstrates its efficiency and effectiveness in solving BO problems. About building a fully Bayesian model: we want to highlight that it is not a problem but one key contribution of this work. Full Bayesian inference lets BKTF generate high-quality uncertainty entirely from observations, which provides candidates that leverage global consistency from the data and makes it possible to achieve an efficient global search with a much smaller budget. 2. $\boldsymbol{g}_d^r$ do need to be sampled, but they can be updated using only the observation points; see the model inference in the Appendix.
However, we argue that this does not affect the computational cost of model inference, which is still $\mathcal{O}\left(n^3\right)$ if utilizing point-wise sampling. For "even if one decides to do fully Bayesian... does not run MCMC at every step": We disagree. Once one commits to a fully Bayesian treatment, the model parameters and hyperparameters need to be updated whenever a new data point is included. "with a reduction in flexibility": We do not see why fully Bayesian inference reduces flexibility. As for VI/MAP, again the inference is not the main cost; see our discussion in the response to all. Even though these algorithms have the potential to reduce computational cost, this comes at the cost of the quality of uncertainty quantification, which is critical to the efficiency of BO. This has also been discussed in related work that develops fully Bayesian AFs/GPs for BO. 3. We have conducted more experiments, including higher-dimensional functions and more baselines. We consider 7 benchmark functions, 4 ML hyperparameter tuning tasks, and 1 synthetic process (in the Introduction). For the "noise", we believe that whether the true function is noisy or not will not affect the result, since in the estimation we always assume noisy data no matter how the true data is generated. The advantage of BKTF is the same. "experiments in Section 5.2...not reflective...": We are extending the experiments for this section by adding more baselines and categorical input variables. The considered settings for the ML hyperparameters are given in Table 4 in the Appendix; we think these tasks are similar to the experiments in the second reference you gave, and similar settings have also been tested in related work. 4. We added more baselines, including additive GPs, Bayesian neural networks, and non-GP approaches. But we did not compare with deep GPs, since they do not have an analytical mean and uncertainty and cannot provide closed-form AF equations.
One advantage of BKTF is that it is flexible and the model can be efficiently sampled leveraging the tensor factorization structure, which is however difficult to implement for deep GPs. $\textbf{for Questions: Major comments:}$ 1. We have discussed the nonstationary and nonseparable processes in the response to all. A kernel/covariance function is separable if it can be decomposed into a product of functions along each dimension. SE-ARD is a stationary and separable kernel. The kernel representation of BKTF shows that the covariance has a sum structure when $R\ge 2$ (thus nonseparable) and is location-specific due to the latent basis (thus nonstationary). 2. BKTF indeed requires fewer samples to achieve better results compared with other surrogates (although we did not include deep GPs), as demonstrated in almost all experiments. This is the key advantage of the model. 3. It means the true functions are not full-rank and have global patterns. 4. Thank you, we will cite the corresponding references in the revised version. 5. We agree that $\beta$ is not a tunable parameter; we actually did not tune it. 6. Bayesian UCB is a natural solution given the MCMC samples we obtain through model inference. It should be noted that other AFs (e.g., EI) are not considered because we do not have analytical uncertainty. 7. We agree that we should cite more related work about BO and have revised the paper, thank you; but we think the discussion about kernel representation is also critical and is a problem that might have been ignored in the BO community, as it can be a simple but efficient strategy to improve performance. $\textbf{Minor Comments:}$ Thanks for the detailed comments. 8. The typo has been corrected; we have also re-checked the whole paper. 9. In the paper, we denote the observation data $y$ as the noisy version of $f$. --- Rebuttal Comment 1.1: Title: Response Comment: I sincerely thank the authors for their response. I also appreciate the additional experiments.
However, I find that there are some disagreements, especially about the utility of the proposed method, which I wish to highlight here. > About building a fully Bayesian model: we want to highlight that it is not the problem but is one key contribution of this work. Fully Bayesian makes BKTF generate high-quality uncertainty totally from observations, which provides candidates that leverage global consistency from the data and make it possible to achieve efficient global search with a much less budget. > > Even though these algorithms have the potential of reducing computational cost, however, at the cost of the quality of uncertainty quantification, which is critical to the efficiency of BO. This has also been discussed in related work that develop fully Bayesian AF/GP for BO. > > We do not see why fully Bayesian inference reduces flexibility. I do not think that being fully Bayesian is bad. However, there is no clear conclusion about the benefit of being fully Bayesian in the BO setting. See the works of De Ath et al. (2021), Berkenkamp et al. (2019). And trust me; I am as Bayesian as you can get. But specifically for BO, I think it is important to acknowledge that the benefit of being fully Bayesian is not yet entirely clear, and therefore provide alternative scalable strategies such as MAP. Furthermore, variational inference (VI), and Laplace approximation are fully Bayesian, and potentially much more efficient than Gibbs. > We will also add the average running time per function evaluation later. > > As for VI/MAP, again the inference is not the main cost, see our discussion in response to all. I think this would be essential if the authors wish to resolve any doubts about the scalability of the method. I certainly believe the authors. However, what we need here is scientific evidence. In particular, the computational cost needs to be compared against *non-fully-Bayesian* GP methods (MAP-II), which are widely used. 
Furthermore, in terms of computation, I checked the code, and it seems that the authors are using random-walk Metropolis-Hastings (RWMH) for the GP baselines. This is *very* concerning. Excuse me if I missed this detail somewhere, but the paper does not seem to report how well MCMC is converging, specifically, $\widehat{R}$ metrics, effective sample sizes, and average acceptance rates. This is important because Gibbs sampling is insensitive to hyperparameters while RWMH is very sensitive. (I would like to note that in most BO papers, NUTS and slice sampling are mostly used because of these tuning issues.) > We added more baselines, including additive GP, Bayesian neural networks, and non-GP approaches. But we did not compare with deep GP, since it does not have analytical mean and uncertainty and cannot provide closed-form AF equations. One advantage of BKTF is that it is flexible and the model can be efficiently sampled leveraging the tensor factorization structure, which is however difficult to be implemented for deep GP. I appreciated the additional experiments, but I'm not convinced why deep GPs cannot be used. Deep GPs have been used for BO before and can be used with Monte Carlo acquisition functions. I think comparing with deep GPs is critical here, because it is the closest baseline in the literature to the proposed method. > The definition of non-stationary and non-separable processes. Can the authors specifically clarify whether this term has been used in the literature before? I am confused that the authors use the word independent here; the kernel is multiplicative across the dimensions, so how can this be independent? ### References George De Ath, Richard M. Everson, and Jonathan E. Fieldsend. 2021. How Bayesian should Bayesian optimisation be? In Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '21). Association for Computing Machinery, New York, NY, USA, 1860–1869. https://doi.org/10.1145/3449726.3463164 Berkenkamp, Felix, Angela P.
Schoellig, and Andreas Krause. "No-Regret Bayesian Optimization with Unknown Hyperparameters." Journal of Machine Learning Research 20 (2019): 1-24. Balandat, Maximilian, Brian Karrer, Daniel Jiang, Samuel Daulton, Ben Letham, Andrew G. Wilson, and Eytan Bakshy. "BoTorch: A framework for efficient Monte-Carlo Bayesian optimization." Advances in Neural Information Processing Systems 33 (2020): 21524-21538. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We discuss and reply below; we hope this addresses your concerns. 1. It seems that a major concern is why we stick to the "fully Bayesian" method in BKTF. The main reason, which we believe has been highlighted in our paper, is that BKTF does not have an analytical posterior distribution to support uncertainty quantification (UQ). This is the key difference from a GP, where the analytical posterior can be used for BO tasks. The main reason for not using MAP is that MAP only provides a point estimate without full UQ; thus estimating BKTF with MAP cannot support BO. Again, given the simplicity of the tensor decomposition model, most variables enjoy analytical posteriors and it becomes natural to use MCMC. For VI, we agree that it is possible to use VI, and it has been used in the GPFA paper ([R1] J. Luttinen et al., “Variational Gaussian-process factor analysis for modeling spatio-temporal data,” NeurIPS, 2009) to learn latent factors. However, the estimation of the kernel hyperparameters there is still MAP without UQ. There are two reasons we prefer not to use VI for BKTF. Firstly, VI/Laplace involves approximations for Bayesian inference (which we do not think can be called fully Bayesian; fully Bayesian indicates directly updating from the posterior).
Generally, simplified assumptions are made for the posterior distributions of the model parameters, such as the independence assumptions between variables in the VI posteriors, which may overlook some posterior correlations, obtain inaccurate posteriors, and largely impact the quality of UQ, particularly when the data size is small (see [R2] A. Sauer et al., “Non-stationary Gaussian Process surrogates,” arXiv preprint, 2023). As high-quality uncertainty is the most crucial part of BO, the poor uncertainty of VI would undermine the advantage of BKTF and affect the global search efficiency for BO. Secondly, as we know, VI is mainly used for a large number of observations. We have explained the computational cost for this work: model inference costs no more than GP inference, and it is not the main cost for BO tasks that only have hundreds of observations. Specifically, here there are fewer than 100 observations for most of the tested functions. The average running time of BKTF per evaluation for 2D Branin is 0.35s; even for 10D Griewank, the time is 2.33s, which we believe is acceptable and really not a problem here. In such cases, we do not see the benefit of gaining the efficiency of VI while losing the high-quality UQ of MCMC. Overall, from our results and the conclusions in related studies: MCMC indeed provides better performance in BO; MCMC is not a problem for BKTF, as we develop an efficient sampling algorithm; and MCMC is important for BKTF to achieve the current results. Therefore we suggest using MCMC in the proposed framework. We will definitely consider VI in future research if it is really needed (e.g. when having a large dataset), and we will add a discussion about VI to the revised paper. 2. "...RWMH for...'': We'd like to clarify that BKTF is implemented exactly as described in the paper (and appendix): the latent factors and weights are updated with Gibbs sampling, and the GP kernel hyperparameters are updated with slice sampling.
We did not use RWMH in the model. 3. "...deep GPs...": We have tried to use deep GPs here; they do not have analytical uncertainty and need approximation methods for the AF. We did consider a Bayesian deep GP with MCMC AFs (as you mentioned). The main problem is that the model parameters (multivariate variables) in the latent layers do not have analytical posteriors, so a complicated sampling algorithm is required. In a recent and relevant work, [R3] A. Sauer et al., “Active learning for deep Gaussian Process surrogates,” Technometrics, 2023, ESS is used to address this problem, which still needs thousands of MCMC samples for inference per evaluation. In contrast, BKTF has analytical posteriors for the latent factors ${g}_d^r$ (a closed-form Gaussian), which can be updated easily and efficiently. Given the time limit, we could not adapt [R3] for BO tasks and thus did not complete the experiments. However, we believe that BKTF can be seen as a more elegant framework that achieves model flexibility similar to a two-layer deep GP but with much better sampling/inference efficiency (less time cost). We will add a more detailed discussion about the connection with deep GPs in the revised paper. 4. "non-stationary and non-separable...?" Yes, certainly. Stationarity and separability are important concepts when defining covariance/kernel functions. We refer the reviewer to [R4] M. G. Genton et al., “Cross-covariance functions for multivariate geostatistics,” Statistical Science, 2015 for a full review of related definitions. For a more recent reference in machine learning, we refer the reviewer to [R5] K. Wang et al., “Nonseparable non-stationary random fields,” ICML, 2020. We will better introduce the related definitions if we can revise the paper.
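As an aside, the MCMC-based Bayesian UCB acquisition discussed in this thread can be sketched in a few lines. The array shapes and the mean-plus-scaled-standard-deviation form below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical posterior: n_mcmc MCMC draws of f over n_cand candidate points
n_mcmc, n_cand = 200, 1000
f_samples = rng.standard_normal((n_mcmc, n_cand))

def bayesian_ucb(samples, beta=2.0):
    # empirical posterior mean plus beta times the empirical posterior
    # standard deviation, both estimated from the MCMC draws
    return samples.mean(axis=0) + beta * samples.std(axis=0)

# next point to evaluate: the candidate maximizing the acquisition
next_idx = int(np.argmax(bayesian_ucb(f_samples)))
```

The point of the sketch is that no closed-form posterior is needed: any surrogate that yields MCMC draws of the function values supports this acquisition directly.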
Summary: Bayesian optimisation most commonly uses Gaussian Processes with the squared exponential or Matern kernel as the surrogate model. The authors propose a new type of surrogate model, "Bayesian Kernelized Tensor Factorization", which introduces some advantages and disadvantages over Gaussian Processes. There seems to be prior work investigating these models for surrogate modelling in general, and this paper is a follow-up applying these models within Bayesian optimisation in particular. The paper introduces the model, which models the data $\{x_i, y_i\}$ from a black box function $y=f(x)$ as a sum of functions where each function is a product of 1-dimensional GPs, e.g. in 2D, let each $g()$ be a 1D GP, then $$ \hat{f}(x_1, x_2) = \sum_i g^i_1(x_1)g^i_2(x_2) $$ which is a continuous analogue of how a matrix can be represented by its SVD or eigendecomposition. This concept generalizes to multiple input dimensions (e.g. $g^i_1(x_1)g^i_2(x_2)g^i_3(x_3)g^i_4(x_4)\cdots$) and the authors discretize the search space into a grid, hence the implementation uses tensors. The model has positive properties: - being able to model functions with separability (where variables do not interact, like in additive kernels) and - non-stationarity. In my interpretation, the thesis of the paper is that lacking these properties is a significant disadvantage, and using a model that has these properties enables a performance improvement. The disadvantage of the proposed model is that inference is no longer in closed form (a product of Gaussian random variables is not another Gaussian), hence an MCMC method is proposed to sample function values at points across the input space. While one could use a random discretization (e.g.
a Latin hypercube, or a cluster around the current best point), given the product structure of the surrogate model, there appear to be implementation benefits of using tensor and matrix Kronecker products if the discretization is a fixed grid: discretize each dimension and build a Cartesian product over the dimensions to obtain a full grid. A range of synthetic and hyperparameter tuning benchmarks show the new model performing favourably against a standard GP using the SE-ARD kernel. Strengths: - in theory, I really like the idea of the model; in particular, that any matrix can be decomposed by SVD hints that any function can be decomposed into a sum and product over functions of each dimension, i.e. the proposed model is, in theory, a universal approximator? Although intuitively, both BKTF and SE-ARD can model any smooth non-stationary surface, discontinuities and kinks are not modellable. - the inclusion of grid based GP methods is nice to see, and shows how much the grid decays performance compared to using the full unrestricted continuous space Weaknesses: # Technical - the proposed Cartesian discretization, $S_D$, scales exponentially with input dimension, and presumably contains _a lot_ of useless points in empty parts of the search space; would a random discretization (LHC or Gaussian around the current best $x$) be so much worse? Given a random set of points $X_D$, it is trivial to compute the joint prior density $P[f(X_D)]$, the likelihood is just Gaussian $P[y_i|f(x_i)]$, and sampling function values can be done with any off-the-shelf MCMC method. - I believe at least an additive GP should be a baseline. If non-stationarity and non-separability are the main advantages of the BKTF model, presumably an additive GP with 2 kernels per dimension (matching CP rank=2 for BKTF) is an obvious baseline that has separability; such a baseline is exactly equation (7) but with sum-sum instead of sum-product.
From this perspective, BKTF is simply an additive GP (that can only model separable variables) with a product over dimensions instead of a sum, and this one change introduces a lot of engineering overhead (MCMC inference vs closed form inference) but also introduces more modelling power (separability can be modelled), given a high enough CP rank (CP rank=1 is just a product of 1D functions and is not separable). - the related work consists of two paragraphs: the first discusses prior work on BKTF (and feels a bit repetitive), the second focuses on stochastic process models. I feel the novelty of this paper is in using another surrogate model inside BO methods, and given the large body of BO work, there have been many works acknowledging the limitations of SE-ARD and proposing alternative models that are not cited or empirically compared to - [Bayesian Neural networks](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=bayesian+optimization+neural+networks&btnG=): - Bayesian optimization with robust Bayesian neural networks, NeurIPS 2016 - Scalable Bayesian optimization using deep neural networks, ICML 2015 - Multi-fidelity Bayesian optimization via deep neural networks: NeurIPS 2020 - [Deep GPs](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=bayesian+optimization+deep+gaussian+&btnG=) - Bayesian optimization using deep Gaussian processes, Arxiv # Presentation (in my personal subjective view) some changes to presentation would have made the paper far more accessible to me - can "CANDECOMP/PARAFAC" simply be described as a tensor generalization of SVD to make it easier for readers? - L119: "we construct a D-dimensional Cartesian product Space"; can we just say "grid" like the authors do for the rest of the paper?
- L81: the Kronecker product is introduced and never used again in the main paper - Section 3.1 would be much easier for me to understand if Eq (7) and (8) were introduced first, and then a Section 3.3 (model inference) described the grid, Equation (6), the MCMC details, and the justification for the grid. - Section 3.2 is nice to mention but for me distracts from the main paper, hence would be much better suited to the appendix. - L192: given a mean and uncertainty, this seems to be standard UCB; why is "Bayesian-UCB" defined? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - the main body of the paper as it is makes the grid feel unnecessary in my view; the paper lacks a justification. Is the grid discretization _really_ required? L172 acknowledges a grid is not required; it is not _required_ for MCMC, the model structure, or for BO. It seems the grid is an implementation choice that does make nice use of the Kronecker structure in the model but also introduces exponential scaling limitations (12**6 = 3M points in the Hartmann experiment). Would the authors mind adding a "memory used" or "time consumed" column to Tables 2 and 3 to show the practical implications of the discretization? Or add a baseline with randomized discretizations instead? - can the authors include 1D additive GPs as a baseline; even better, adding two kernels (with independent hyperparameters) per dimension would be a very close model to BKTF with CP rank=2 - at least one Bayesian neural network baseline model would compare this surrogate model with other well studied non-GP surrogate models, e.g. https://github.com/automl/RoBO, (though I am sure there are newer implementations) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - as per the above comment, discretizations in higher dimensions are generally considered bad practice, in particular, unguided/naive grid discretizations that include many dead points. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your thorough and detailed review! For the comments in "Summary", 1: we are really glad that you brought up "random discretization", since we also discussed this many times when designing the model; we will say more about this below; 2: for "implementation benefits...": the underlying idea is correct, although it is not the Kronecker product: the mode-$k$ $(k=1,\ldots,D)$ unfolding matrix of the data tensor can be directly reconstructed with the Khatri–Rao product and matrix multiplication of the latent matrices. With CP decomposition we have $\boldsymbol{Y}_{(k)}=\boldsymbol{G}_k\left(\boldsymbol{G}_D\odot\cdots\odot\boldsymbol{G}_{k+1}\odot\boldsymbol{G}_{k-1}\odot\cdots\odot\boldsymbol{G}_1\right)^{\top}$. Yes, this is the reason we reconstruct the whole grid space to select the next query point instead of random sampling in the first place. For the comments in "Strengths", we are very glad that you like the idea of the model and think it could be a universal approximator in theory. About "...discontinuities and kinks...": BKTF can model discontinuous/categorical processes through a Wishart prior on the corresponding dimensions; we mention this at the end of the paper, see lines 364-365. In addition, BKTF (for nonstationary and nonseparable processes) can be far more flexible than a GP with an SE-ARD kernel (stationary and separable). Please refer to the related discussion in the response to all. For the other comments, there are mainly two concerns: (a) we should consider random discretization; (b) we should compare with additive GP and other baselines. We thank you for the constructive comments. We truly hope the answers below will resolve your concerns, and we hope you will consider increasing the score for this work. $\textbf{for Weaknesses: Technical:}$ 1. We have discussed the computational cost of BKTF, including the AF computation with grid and random discretization, in the response to all.
The enumeration-based AF requires more time and memory but also finds the optimum with fewer "experiment" budgets, especially for low dimensional problems, so it is still suggested when the AF cost is trivial compared with the problem itself. The random selection-based AF provides an alternative solution for higher dimensional functions and can alleviate the curse of dimensionality, but needs more iterations of function evaluations to find the global optima. We have compared BKTF-grid (reconstructing the grid space for AF) and BKTF-random (randomly selected points for AF) on benchmark functions; the results in Table A and Figure A (see PDF) are consistent with this conclusion. 2. We have added an additive GP, with the sum of two 1st-order additive kernels, i.e., the sum of two kernel functions per dimension, as a baseline for the experiments on benchmark functions. Following the reviewer's description, such an additive GP matches $R=2$ BKTF. We have to say that BKTF is not equivalent to an additive GP. The intrinsic difference is that the kernel representation of BKTF is built on the latent factors, which are also learned from the data, not on fixed kernel functions (see Eq. (10) in the paper). The results on the benchmark functions have also demonstrated that BKTF can be much more flexible than an additive GP. 3. We thank the reviewer; we agree we should discuss more related work on BO. We have cited and discussed the recommended references in the revised paper. $\textbf{for Weaknesses: Presentation:}$ 1. The CP decomposition is different from SVD as it does not require the basis to be orthonormal. We may say that CP decomposition is a tensor generalization of matrix factorization (MF). 2. Yes, we definitely can. 3. Sorry, we have removed the definition here and introduce it in the Appendix before it is used. 4. We agree that the grid is not necessary for model inference, but Eqs.
(7) and (8) are actually obtained based on the underlying model assumption of kernelized CP decomposition. 5. We agree that the discussion of the kernel representation and the connection with other GP-related models is not the main problem in the BO task, but the flexible kernel representation of BKTF is actually the key motivation for introducing the model. We will put it after Section 3.4 of the original paper to keep the methodology part unbroken and easy to read. 6. We define the AF as Bayesian-UCB because the mean and uncertainty are obtained in a Bayesian way from MCMC samples; thus the AF does not have analytical equations for BKTF, which is different from the closed-form results of GP surrogates. $\textbf{for Questions:}$ 1. The grid is not required for model inference and is only needed for AF computation to determine the next query point. We have discussed the computational cost of BKTF with grid and random selection-based AF strategies. We have added BKTF with random discretization as the reviewer suggested, denoted BKTF-random; the results on benchmark functions are given in Table A and Figure A in the PDF. 2. Following the suggestion, we have added an additive GP with the sum of two 1st-order additive kernels as a baseline in the experiments, see Table A and Figure A. 3. We have considered Bayesian neural network-based approaches as baselines. Given the time limit, we could not finish the comparison experiments yet. Nevertheless, based on the results from the RoBO paper you suggested, see the ANN/DNGO and RF results in Table A: Bayesian neural networks and other non-GP approaches perform worse than GP surrogates in low dimensional problems. $\textbf{for Limitations:}$ As mentioned, we have discussed the effects of the grid for this work as much as we can.
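For concreteness, the Khatri–Rao reconstruction of a mode-$k$ unfolding that we referred to above can be sketched as follows (a minimal NumPy illustration with toy dimensions, not our actual implementation):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

rng = np.random.default_rng(0)
R = 2                               # CP rank
G1 = rng.standard_normal((4, R))    # latent factor matrices, toy sizes
G2 = rng.standard_normal((5, R))
G3 = rng.standard_normal((3, R))

# Full tensor from the CP factors: Y[i, j, k] = sum_r G1[i,r] G2[j,r] G3[k,r]
Y = np.einsum('ir,jr,kr->ijk', G1, G2, G3)

# Mode-1 unfolding equals G1 @ khatri_rao(G3, G2).T
# (Fortran-order reshape matches the standard unfolding convention)
Y1 = G1 @ khatri_rao(G3, G2).T
assert np.allclose(Y.reshape(4, -1, order='F'), Y1)
```

This is why the whole grid tensor can be reconstructed from the small latent matrices without ever forming a Kronecker-product kernel.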
--- Rebuttal 2: Title: Thanks for the thoughtful response Comment: I am happy to hear about the extra baselines, and the discussion of practicalities does make the paper stronger, although I cannot see any results; if the authors would be able to share a plot or results table in this discussion I would be happy to increase my score. --- Rebuttal Comment 2.1: Title: Thank you and updated results Comment: Thank you for your reply! Sure. We attached a one-page pdf in the global response (should be visible now), where a ***figure*** and a ***table*** for the updated results are included. The table is also given below:

| Function ($D$) | GP $\alpha_{\text{EI}}$ | GP $\alpha_{\text{UCB}}$ | GPgrid $\alpha_{\text{EI}}$ | GPgrid $\alpha_{\text{UCB}}$ | additive GP | ANN/DNGO | RF | BKTF-random | BKTF-grid |
|----------------|-------------------------|--------------------------|-----------------------------|------------------------------|---------------|-----------------------------|---------------|------------------------|------------------------|
| Branin (2) | 0.01$\pm$0.01 | 0.01$\pm$0.01 | 0.31$\pm$0.62 | 0.24$\pm$0.64 | 0.05$\pm$0.09 | $\approx$0.025/$\approx$0.4 | $\approx$0.99 | $\textbf{0.00}\pm$0.00 | $\textbf{0.00}\pm$0.00 |
| | $\approx$44 | $\approx$42 | $\approx$23 | $\approx$36 | $\approx$100 | $\approx$100/$>$100 | $>$200 | $\approx$47 | $\approx\textbf{4}$ |
| Damavandi (2) | 2.00$\pm$0.00 | 2.00$\pm$0.00 | 1.60$\pm$0.80 | 2.00$\pm$0.00 | 2.00$\pm$0.00 | - | - | 0.60$\pm$0.92 | $\textbf{0.00}\pm$0.00 |
| | - | - | - | - | - | - | - | $\approx$48 | $\approx\textbf{5}$ |
| Schaffer (2) | 0.02$\pm$0.02 | 0.02$\pm$0.02 | 0.10$\pm$0.15 | 0.09$\pm$0.07 | 0.03$\pm$0.03 | - | - | $\textbf{0.00}\pm$0.00 | $\textbf{0.00}\pm$0.00 |
| | $\approx$36 | $\approx$44 | $>$50 | $>$50 | $\approx$43 | - | - | $\approx$54 | $\approx\textbf{22}$ |
| Griewank (3) | 0.14$\pm$0.14 | 0.25$\pm$0.10 | 0.23$\pm$0.13 | 0.22$\pm$0.12 | 0.10$\pm$0.09 | - | - | $\textbf{0.00}\pm$0.00 | $\textbf{0.00}\pm$0.00 |
| | $>$100 | $>$100 | $>$100 | $>$100 | $\approx$100 | - | - | $\approx$47 | $\approx\textbf{43}$ |
| Griewank (4) | 0.10$\pm$0.07 | 0.19$\pm$0.12 | 0.38$\pm$0.19 | 0.27$\pm$0.17 | 0.13$\pm$0.11 | - | - | $\textbf{0.00}\pm$0.00 | $\textbf{0.00}\pm$0.00 |
| | $>$100 | $>$100 | $>$100 | $>$100 | $>$100 | - | - | $\approx$87 | $\approx\textbf{68}$ |
| Hartmann (6) | 0.12$\pm$0.07 | 0.07$\pm$0.07 | 0.70$\pm$0.70 | 0.79$\pm$0.61 | 0.48$\pm$0.17 | $\approx$0.14/$\approx$0.21 | $\approx$0.52 | 1.41e-5$\pm$1.73e-5 | $\textbf{0.00}\pm$0.00 |
| | $>$100 | $>$100 | $>$100 | $>$100 | $>$100 | $>$200/$\approx$200 | $>$200 | $\approx$154 | $\approx\textbf{60}$ |
| Griewank (10) | 0.36$\pm$0.07 | 0.38$\pm$0.10 | - | - | 0.25$\pm$0.30 | - | - | $\textbf{0.00}\pm$0.00 | - |
| | $>$200 | $>$200 | - | - | $>$150 | - | - | $\approx\textbf{124}$ | - |

The table compares optimization performance on the benchmark functions, where ***First row***: $\left|f^{\star}-\hat{f}^{\star}\right|$ when $n=N$ (mean$\pm$std.); ***Second row***: average cost in number of evaluations to find the optimum. The results for more baselines and on the 10D Griewank function are given. The results of ANN/DNGO (Bayesian neural network related methods) and RF (random forest) are from Figure 3.3 in the reference paper [R1]. A ***figure*** showing the detailed optimization results is provided in the ***pdf*** attached in the global response (response to all); please download the pdf file and refer to ***Figure A*** for more results. We will add the baselines and results in the revised paper, thank you again for your comments. [R1]: A. Klein, “Efficient bayesian hyperparameter optimization,” Ph.D. dissertation, Universität Freiburg, 2020.
Summary: The paper presents a new surrogate model called Bayesian Kernelized Tensor Factorization (BKTF) for Bayesian Optimization (BO). The BKTF model approximates the solid in the D-dimensional space using a fully Bayesian low-rank tensor CP decomposition. It uses Gaussian process (GP) priors on the latent basis functions for each dimension to capture local consistency and smoothness. This formulation allows sharing of information not only among neighboring samples but also across dimensions. The paper proposes using Markov chain Monte Carlo (MCMC) to efficiently approximate the posterior distribution. The paper demonstrates the effectiveness of BKTF through numerical experiments on standard BO test functions and machine learning hyperparameter tuning problems. Strengths: 1. One of the significant strengths of the paper is the novel and reasonable solution of incorporating the idea of tensor decomposition into Bayesian Optimization (BO). This approach allows for a more efficient and effective representation of the D-dimensional Cartesian product space, enhancing the performance of BO. The adoption of tensor decomposition represents a significant advancement in the field and demonstrates the authors' innovative thinking. 2. The usage of two-layer GPs is impressive. This approach is clever as it allows for the sharing of information among neighboring samples and across dimensions, enhancing the model's ability to capture local consistency and smoothness. Weaknesses: 1. The cost of several cascaded full GPs may be high, especially for cases with many nodes (referred to as "coordinates" in the paper) in some dimension. More discussion is encouraged on the scalability analysis or possible solutions, such as sparse GPs, to reduce the cost. 2. As the tensor rank R is always a crucial hyperparameter for tensor decomposition, I'm curious about how the rank setting could influence the BO.
It would be great if the authors could give some comments or results on why R=2 is sufficient for the model setting. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive and detailed review! We are very glad and appreciate that you agree with the novelty of this work. We address the weaknesses below, and hope the answers will resolve your concerns. 1. For model scalability, we have discussed the computational cost of BKTF in terms of model inference and AF computation in the response to all. As we mentioned, the time cost of model inference for BKTF should be the same as GP inference ($\mathcal{O}\left(n^3\right)$, where $n$ is the number of observed data points) when using point-wise sampling, and could be better than GP inference when $n$ becomes larger by utilizing the low-rank factorization, which costs $\mathcal{O}\left(\sum_{d=1}^{D}|S_d|^3\right)$. For data off the grid, we can indeed leverage sparse GPs and inducing points/grids for fast inference, similar to the idea used in KISS-GP. Nevertheless, for BKTF, we would like to highlight that the main cost is caused by the enumeration-based AF computation when selecting the next query point, which increases exponentially with the number of dimensions. We believe this is an important follow-up research question. A possible solution is to randomly select certain candidate points in the defined space for the AF instead of reconstructing the whole data tensor. For the experiments in the current paper, BKTF with the grid-based AF query strategy is acceptable; we will give the running time per function evaluation of different models on the tested benchmark functions later. We have also evaluated a higher dimensional 10D benchmark function and considered random discretization for the AF of BKTF, denoted BKTF-random (see the updated results in Table A and Figure A in the PDF). The results show that BKTF with a randomly selected AF provides an alternative for handling higher dimensional problems for which BKTF-grid is not feasible, and it still obtains the best optimization performance compared with the other models.
This demonstrates the superior advantage of the underlying BKTF framework. 2. For the rank setting, the reason we choose a small rank, i.e., $R=2$, is that the number of available/observable data points is very small, e.g., fewer than 100. In such cases, a small rank is enough to capture the flexibility/correlations of the data and can estimate a surface with adequate uncertainty quantification through highly efficient inference. In addition, since the task in BO is to find the global optimal values with the smallest budget, the accuracy of the reconstructed surface with respect to the true surface is actually not the primary goal. For problems with a large number of observations, one can use a larger $R$ or extend BKTF to a non-parametric Bayesian version to automatically learn/adjust the rank $R$ based on the data.
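As a toy illustration of why a small rank can already capture nonseparable structure (a hypothetical example, not from the paper): the surface $\sin(x_1+x_2)$ is nonseparable, yet has exact CP rank 2 because $\sin(x_1+x_2)=\sin x_1\cos x_2+\cos x_1\sin x_2$. A short NumPy check:

```python
import numpy as np

x1 = np.linspace(0.0, np.pi, 50)
x2 = np.linspace(0.0, np.pi, 60)

# Rank-2 latent factors per dimension (columns play the role of the g_d^r)
G1 = np.stack([np.sin(x1), np.cos(x1)], axis=1)  # (50, 2)
G2 = np.stack([np.cos(x2), np.sin(x2)], axis=1)  # (60, 2)

F_rank2 = G1 @ G2.T  # sum over R=2 of outer products of 1D factors
F_true = np.sin(x1[:, None] + x2[None, :])
assert np.allclose(F_rank2, F_true)  # exact rank-2 representation
```

In BKTF the factors are learned from data under GP priors rather than fixed, but the example shows how few components a strongly interacting surface may need.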
Summary: This paper presents a surrogate based on tensor decompositions for approximating complex functions, allowing for Bayesian-style maximization. Numerical experiments show the (slight) superiority of this model over classical Bayesian approaches. However, a limitation of this approach is the small dimensionality of the target functions and the need to use a discrete grid. Strengths: - The proposed algorithm uses a very small budget to find the maximum of complex functions (gradient-free, multimodal). - A good potential for expanding and improving the proposed algorithm. - This article uses a tensor approach for machine learning problems - The presented new algorithm, in my opinion, has quite a lot of possibilities for improvement, and the article itself is complete. Weaknesses: - A small number of numerical examples. - The final accuracy in Fig. 3b is better, but very close to the accuracy of the other methods with which the comparison is made. - No comparison with non-Bayesian methods of finding the maximum. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In line 96 of the text, a random noise $\epsilon_i$ is added to the real values of the approximating function. Did you add the noise during numerical experiments to the model functions (Branin, Schaffer, Griewank, etc.)? If so, what was the variance of the noise? Did you try to run your algorithm in higher dimensions (with fewer $m_d$ values), or on more complex functions which need more budget for convergence? What is the characteristic running time of the proposed algorithm compared to the others mentioned in the article? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Small dimension of functions for which the maximum is searched for.
This is due to the fact that AF has to be found by unrolling the tensor from CP to the full format. Thus, one of the main advantages of the CP tensor format, related to overcoming the curse of dimensionality, is not used. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We are really glad that you agree with the potential of this work. The main concern is the small dimensionality of the test functions, i.e., the scalability of the model, which relates to the computational cost and the grid assumption. $\textbf{We have explained the computational cost in the response to all}$, and $\textbf{conducted more experiments with more baselines and discussed the running time.}$ We explain the weaknesses, answer the questions, and reply to the limitations one by one below. $\textbf{for Weaknesses:}$ 1. We conducted more experiments. The updated results on benchmark functions are given in Table A and Figure A (see PDF), where a higher dimensional 10D Griewank function is tested and more baselines are considered. Given the time limit, we only give the results on one more function, but we are also testing on more complicated but low dimensional functions (multi-modal functions with global structure: the Rastrigin and Weierstrass functions) and extending the experiments in Section 5.2 (adding baselines and categorical inputs) as well. We believe the conclusions are similar: the proposed BKTF surrogate can achieve superior performance for moderate dimensional BO tasks, particularly with severely limited budgets; we will update the experiments in the revised paper. 2. We think the reviewer means the classification accuracy results in Figure 3(a), where BKTF (blue lines) is faster at finding better hyperparameter combinations for the algorithms on the MNIST dataset. The final accuracy is close because the MNIST dataset we used (1797 samples) is simple, and a 0.2\% improvement in accuracy is already substantial. 3.
Actually, the GP surrogate-based baselines, i.e., GP/GPgrid $\alpha_{\text{EI}}$ and GP/GPgrid $\alpha_{\text{UCB}}$, are not Bayesian methods in this sense, since the GP surrogates have analytical solutions for the uncertainty and mean, i.e., for the acquisition functions (AF), and can be directly applied to BO tasks. We have added an additive GP surrogate with EI as a new baseline model, which is not Bayesian either. If the reviewer is referring to non-GP comparison models, we can look into the work in [R1] (Figure 2) and [R2] (Figure 3.3), in which Bayesian neural network-based BO approaches (e.g., BOHamiANN) and other non-GP methods such as random forest (RF) are also compared on low dimensional benchmark functions, e.g., the 2D Branin function (also tested in this paper). From the result figures, GP surrogates clearly obtain the best performance among the compared models for low dimensional function optimization. $\textbf{for Questions:}$ 1. We did not add noise to the model functions. The random noise $\epsilon_i$ is added only for estimation. It would be straightforward to add white noise to the objective model functions, since this does not affect the estimation process. In estimation, we assume the data is corrupted by white noise no matter how the real/true data is generated. 2. Yes. We added experiments on a 10D benchmark function, and are still conducting several other experiments on more complex low-dimensional benchmark functions and extending the experiments in Section 5.2. From the current updated results, we see that the proposed BKTF clearly and consistently provides the best performance. 3. We have discussed the computational cost in the response to all. Compared to the GP surrogate baselines, the cost of model inference should be similar when using the point-wise updating strategy, which is $\mathcal{O}\left(n^3\right)$, where $n$ is the number of observation points.
As for the time cost of AF computation, as we explained in the response to all, when the cost of each BO iteration is trivial compared with the cost of the design experiment itself, the full enumeration (as used in our paper) is acceptable since it provides a more efficient solution to the optimization problems. When the cost of BO becomes considerable compared with the design experiment, one should develop more efficient solutions for AF computation, such as using random (see the new BKTF-random results) or local subsets as suggested by the reviewer. For the experiments conducted in the paper, we think the running time per iteration of evaluation for BKTF is acceptable and similar to the baseline models in low-dimensional problems. Thus the total running time of BKTF for finding the global optima should be better than the others, since it requires the fewest iterations. Given the time limit, we will give the average running time of different models on the benchmark functions later in the paper. $\textbf{for Limitations:}$ Yes, the unrolling of the tensor for AF computation has a scalability issue. We believe this is an important follow-up question to answer when extending the proposed framework to problems with higher dimensionality. Right now we have only evaluated the alternative of using a random subset for the AF for higher dimensional problems, see BKTF-random. But we believe the underlying advantage/contribution of BKTF is the same, that is, introducing a more flexible and elegant fully Bayesian surrogate that can capture/leverage the global correlations and obtain high-quality uncertainty quantification from the limited data to achieve efficient global search with limited budgets. [R1] A. Klein, S. Falkner, N. Mansur, and F. Hutter, “RoBO: A flexible and robust Bayesian optimization framework in Python,” in NIPS 2017 Bayesian optimization workshop, 2017, pp. 4–9. [R2] A. Klein, “Efficient bayesian hyperparameter optimization,” Ph.D.
dissertation, Universität Freiburg, 2020. --- Rebuttal Comment 1.1: Comment: Thank you for the very detailed response. After reading your answer and the discussion with other reviewers, I am going to keep the positive score 7: Accept.
Rebuttal 1: Rebuttal: Dear reviewers, Thank you for your time and the thorough and invaluable feedback. We appreciate that the contribution of our paper is well perceived and recognized, as reflected in the contribution scores (3, 3, 3, 3). We are also pleased that the concept of introducing a kernelized low-rank factorization model as a surrogate for Bayesian Optimization (BO) resonates with you. In this general response, we would like to highlight two primary areas that have garnered your attention: (a) the numerical experiments are not sufficient, in terms of the tested functions and baseline models; (b) the computational cost of the model, related to the grid assumption and the scalability issue. We are conducting more experiments on higher dimensional benchmark functions with more comprehensive baselines. However, given the time limit, several experiments are still ongoing and we will update the results in a later version of the rebuttal. Below we address and clarify some common points from the reviewers. 1. The definitions of $\textbf{nonstationary}$ and $\textbf{nonseparable}$ processes. We should provide a brief introduction in the main paper. A stationary covariance function depends only on the distance between the data points and is invariant to the specific locations. The commonly used SE kernel is stationary as it is determined by $|x-x'|$; a linear kernel is not stationary as the covariance value is location-specific. For an additive GP, if all the component kernels are stationary, the final kernel function will still be stationary. A covariance/kernel function is separable if $k\left(\boldsymbol{x},\boldsymbol{x}'\right)=k_1\left(x_1,x_1'\right)k_2\left(x_2,x_2'\right)\cdots k_D\left(x_D,x_D'\right)$, thus implying independence between the input dimensions. The commonly used SE-ARD kernel is a separable kernel.
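These two definitions can be checked numerically for the SE-ARD kernel (a minimal sketch using the standard textbook form of the kernel; all sizes are illustrative):

```python
import numpy as np

def se_ard(x, y, lengthscales):
    """Squared-exponential ARD kernel on D-dimensional inputs."""
    return np.exp(-0.5 * np.sum(((x - y) / lengthscales) ** 2))

def se_1d(xd, yd, ell):
    """1D squared-exponential kernel."""
    return np.exp(-0.5 * ((xd - yd) / ell) ** 2)

rng = np.random.default_rng(1)
D = 4
x, y = rng.standard_normal(D), rng.standard_normal(D)
ells = rng.uniform(0.5, 2.0, D)

# Separability: the SE-ARD kernel is an exact product of 1D SE kernels
prod = np.prod([se_1d(x[d], y[d], ells[d]) for d in range(D)])
assert np.isclose(se_ard(x, y, ells), prod)

# Stationarity: the kernel depends only on x - y (invariant to shifts)
shift = rng.standard_normal(D)
assert np.isclose(se_ard(x + shift, y + shift, ells), se_ard(x, y, ells))
```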
Stationary and separable kernel functions offer computational advantages; however, they are limited in modeling functions/processes with complex dependency structures. 2. The $\textbf{computational cost}$ of the proposed BKTF surrogate. For the cost of model inference, when the number of observations $n$ is small, we use point-wise sampling, and the computational cost is the same as a full GP, $\mathcal{O}\left(n^3\right)$. When $n$ becomes larger, we can update the model based on the latent factors by leveraging the low-rank structure; the computational cost is then $\mathcal{O}\left(\sum_{d=1}^D|S_d|^3\right)$, which could be more efficient than $\mathcal{O}\left(n^3\right)$. The cost of model inference should thus be no worse than that of full GP inference. The main cost comes from the computation of the AF. To select the next query point, in the current paper we simply unroll the whole data tensor in the defined grid space (i.e., enumeration-based AF), and the costs increase exponentially with the dimensions. A potential solution (as mentioned by the reviewer) is to randomly select candidate points or develop more efficient strategies instead of reconstructing the whole space. This can alleviate the curse of dimensionality, but at the cost of a larger evaluation budget (more BO iterations). We test random discretization/selection, denoted as BKTF-random, on benchmark functions and a 10D Griewank function. The results are consistent with our assumptions: BKTF-random can be applied to higher-dimensional problems that cannot be handled with a grid, but costs more iterations for low-dimensional functions compared with BKTF-grid. When the cost of the experiment itself is highly expensive (e.g., some real-world optimization problems may need multiple hours or days to solve), an additional overhead from the AF computation of a few seconds or minutes is acceptable if better optimization efficiency is achieved.
But when the cost of the AF is not acceptable, we would suggest using more efficient search strategies. More solid research is also needed to design an AF for BKTF that balances the two construction strategies; we take this as an important future research question. 3. For the $\textbf{experiments}$, we test on a higher-dimensional 10D Griewank function and are testing other low-dimensional but complex functions (Rastrigin and Weierstrass functions), and extending the experiments in Section 5.2 as well. In terms of the $\textbf{baseline models}$, we have added: (a) additive GP with two 1st-order additive kernels per dimension (same number of latent functions as BKTF but in a sum-based manner); (b) BKTF with random discretization; (c) non-GP approaches, such as Bayesian neural networks and RF. The updated results are given in Table A and Figure A in the PDF; more results will be updated soon. Overall, we believe BKTF provides an elegant framework that can achieve more efficient and stable performance for BO tasks, particularly with severely limited budgets. We hope the response can address your concerns. Pdf: /pdf/530368dd965b58dcf52e6c3b93fb9dd42a62fc65.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Spike-driven Transformer
Accept (poster)
Summary: This paper proposes a Spike-driven Transformer that incorporates the spike-driven paradigm of SNNs into the Transformer and is hardware-friendly for neuromorphic chips. The authors use SDSA to transform the multiplication operation between Query, Key, and Value into a combination of mask and sparse addition operations and modify the self-attention structure to achieve linear complexity. As a result, this approach can achieve up to 87.2× lower energy consumption compared to the traditional self-attention method. Furthermore, membrane shortcut is introduced to ensure all spiking neurons communicate via binary spikes. The paper provides insights into implementing each module on neuromorphic chips and achieves competitive results on multiple datasets while reducing energy consumption compared to previous works. Strengths: The authors introduce the Spike-Driven Self-Attention (SDSA) module as a replacement for the self-attention module in the current spiking Transformer. The proposed module exhibits superior performance while significantly reducing energy consumption, resulting in an architecture that is better suited for deployment on neuromorphic chips. Weaknesses: This paper explores the feasibility of implementing Conv, MLP, and Self-attention models driven by spikes on neuromorphic chips. However, it does not delve into the implementation details of membrane shortcuts. This approach involves the direct transmission of membrane potential signals between spiking neurons and may contradict the spike-driven approach discussed in the original text. The effectiveness of a spikformer-based framework with only SDSA, without MS, remains unclear as most experiments in the literature combine both techniques. This weakens the argument for the performance of the spike-driven Transformer proposed in this paper. Although an ablation study was conducted on MS and SDSA in Table 5, it was limited to the CIFAR10/100 dataset and Spiking Transformer-2-512. 
The results suggest that the use of MS may be crucial to achieve state-of-the-art performance. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How is the membrane shortcut implemented on a neuromorphic chip? Is it necessary to use it instead of SEW shortcut? The article explains that membrane shortcuts are used to optimize the distribution of membrane potential, but for spiking neurons, wouldn't it be more appropriate to perform such an optimization in the time domain? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We have carefully studied your comments and argue that **your concerns can be addressed**. We incorporate the spike-driven paradigm into Transformer. In spike-driven Transformer, there are only sparse additions. This limitation is quite severe, but we still achieved SOTA on ImageNet in SNN. To achieve this, we designed a novel Spike-Driven Self-Attention (SDSA) and tuned shortcuts. **What's more, the SDSA operator is the third class of operators besides spike-driven Conv and MLP, which will inspire future neuromorphic chip designs.** Since this work is the first to incorporate spike-driven into Transformer, it is reasonable to think that it has more room for improvement. **This work will undoubtedly impact the SNN domain in algorithm and neuromorphic chip design. Given the significant technical contributions and importance of this work to the field, we sincerely hope that you will reconsider your rating** > *W1 and Q1*. it does not delve into the implementation details of Membrane Shortcuts (MS). This approach... How is MS implemented on a neuromorphic chip? **A: The MS does not conflict with spike-driven** - **Spike-driven** implies spike-based event-driven, which restricts all multiplications associated with spike tensors to be implemented as additions. - **Spike-driven on a neuromorphic chip.** Spike-driven paradigm is implemented on neuromorphic chips in the form of **addressable algorithms[1]**. The standard protocol for spike communication is the asynchronous Address-Event Representation (AER), from simple point-to-point links to complex networks-on-a-chip. So, spike-driven router occupies an important position in chip design. - **MS on a neuromorphic chip.** The design of a spike-driven router is affected by many factors, e.g., chip architecture, layout, clock, manufacturing process, applications. Let's take Speck of SynSense as an example to illustrate how to implement MS. 
Speck is an asynchronous chip focused on processing event streams. In Speck [2], whenever an event arrives at an SNN core with its address information, the corresponding kernel value and destination neuron position are obtained by address searching; the destination neuron states are then asynchronously updated according to the synaptic operation. Moreover, asynchronous spike-driven convolution is independent of the arrival of other input events and cores, so the operation can be efficiently parallelized for multiple events at different positions. Specific to using MS in Speck, when a spiking neuron receives a spike, the membrane potential must change. Then, another addressing function can be used to pass this membrane potential to the corresponding neuron in the subsequent layer for merging. > *Q2*. Is it necessary to use the MS shortcut instead of the SEW shortcut? **A:** It depends on the specific situation. We impose a very strict restriction that there can only be addition in the whole Transformer. In this case, SEW cannot exist because it would introduce integer multiplication. If we relax this restriction, or if there are neuromorphic chips that support integer multiplication, SEW can also exist. > *Q3*. The article explains that membrane shortcuts are used to optimize the distribution of membrane potential, but for spiking neurons, wouldn't it be more appropriate to perform such an optimization in the time domain? **A:** It is indeed feasible to perform membrane potential optimization in the time domain on some small time-domain tasks, e.g., using long short-term memory [3]. But when processing large static datasets, such as ImageNet, where deep SNNs are required, membrane potential optimization using residual connections in the spatial domain to avoid performance degradation is mandatory [4]. > *W2*. The effectiveness of a spikformer-based framework with only SDSA, without MS, remains unclear...
This weakens the argument for the performance of the spike-driven Transformer. The results suggest that the use of MS may be crucial to achieve SOTA. **A:** We tend to understand the spike-driven Transformer from the view of MetaFormer [6], which argues that there is a general architecture abstracted from Transformers by not specifying the token mixer. Setting the operators in the token mixer will result in different accuracies. So, MS and SEW are changes to the architecture, while SDSA and SSA are adjustments to operators. To verify this, we set $T=4$, structure-8-384 on ImageNet as follows: |Model|SEW+SSA(Base[5])|SEW+SDSA|MS+SSA|MS+SDSA(This work)| |---|---|---|---|---| |Acc(\%)|70.2|68.1|72.7|72.3| We can see that MS brings gains to architecture performance: MS+SSA (72.7) vs. SEW+SSA (70.2), and MS+SDSA (72.3) vs. SEW+SDSA (68.1). Compared to SSA, SDSA indeed performs worse. But it should be noted that there is only addition in the proposed SDSA and the complexity is $O(ND)$, while the SSA in [5] contains multiplication (multi-bit integer multiplication and scale operations) and the complexity is $O(ND^2)$. Note that SDSA is a severely restricted operator, but it works pretty well. So, once the basic architecture is determined, the operator has a trade-off between accuracy and energy. We appreciate your willingness to discuss this issue with us. **This actually points out two directions for further optimization: architecture and spike-driven operators.** --- [1] Bottom-Up and Top-Down Approaches for the Design of Neuromorphic Processing Systems: Tradeoffs and Synergies Between Natural and Artificial Intelligence. In Proceedings of the IEEE, 2023. [2] Event-driven spiking convolutional neural network. WIPO Patent, page WO2020207982A1, 2020. [3] A long short-term memory for AI applications in spike-based neuromorphic hardware. In Nature Machine Intelligence, 2022. [4] Attention Spiking Neural Networks. In IEEE T-PAMI, 2023.
[5] Spikformer: When Spiking Neural Network Meets Transformer. In ICLR, 2023. [6] Metaformer is actually what you need for vision. In CVPR 2022. --- Rebuttal Comment 1.1: Comment: Thank you for providing detailed responses and conducting additional experiments in your rebuttal. It is evident that SDSA can significantly reduce computational complexity without using MS, while maintaining competitive performance. Moreover, this approach facilitates easy deployment on neuromorphic chips. However, I still hold reservations regarding the use of voltage as output for membrane shortcut, as it does not align with the mechanisms of SNNs. In summary, I am willing to revise my score from 4 to 5. --- Reply to Comment 1.1.1: Comment: Thank you very much for your endorsement, the discussion with you made us rethink this work carefully. We believe this is very helpful for our future work.
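The distinction between the two shortcut styles discussed in this thread can be sketched in a few lines. This is a toy illustration only, not the authors' implementation: `spike_neuron` is a hypothetical Heaviside stand-in that ignores real LIF dynamics (leak, reset, timesteps). The point is the output alphabet: a SEW shortcut sums spikes after the neuron and can emit multi-bit values, while a membrane shortcut (MS) sums before thresholding so the output stays binary.

```python
# Toy sketch of the SEW vs. MS shortcut styles discussed in the rebuttal
# (illustrative only; real LIF neuron dynamics are heavily simplified).
import numpy as np

def spike_neuron(u, threshold=1.0):
    """Heaviside stand-in for a spiking neuron layer SN(.): fires 0 or 1."""
    return (u >= threshold).astype(np.float64)

rng = np.random.default_rng(0)
u_prev = rng.uniform(0, 2, size=8)   # membrane potentials entering a block
branch = rng.uniform(0, 2, size=8)   # membrane potentials from a residual branch

# SEW shortcut: add AFTER spiking -> values in {0, 1, 2} (multi-bit "spikes").
sew_out = spike_neuron(u_prev) + spike_neuron(branch)

# Membrane shortcut (MS): add BEFORE spiking -> output stays binary.
ms_out = spike_neuron(u_prev + branch)

assert set(np.unique(sew_out)) <= {0.0, 1.0, 2.0}
assert set(np.unique(ms_out)) <= {0.0, 1.0}
```

This is why the rebuttal says SEW "would bring integer multiplication": a value of 2 entering the next synaptic layer turns spike-driven addition into multi-bit arithmetic, whereas MS keeps all inter-layer communication binary.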
Summary: This paper proposes a Spike-Driven Self-Attention (SDSA) module, which uses Hadamard product, column-wise summation, and a spiking neuron layer to replace the matrix multiplication and softmax operation. Experiments on static and neuromorphic image classification demonstrate competitive performance and energy efficiency. Strengths: 1. The authors proposed a novel form of linear self-attention module, SDSA, which leverages the features of the spikes and increases computational and memory efficiency. 2. The authors show extensive results from different models and different datasets. The performance of accuracy and energy efficiency is very strong. 3. SDSA reduces the computational complexity with a slight accuracy drop, and the authors show some attention maps to validate the effectiveness of the SDSA. Weaknesses: 1. There is still a noticeable drop between the spike-driven transformer and the original transformer. Can the authors show some results of the model with more time steps and report the limitations of the model? 2. Can the authors show the performance and energy consumption as the number of time steps decreases? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The ablation study shows the membrane shortcut brings significant accuracy improvement. I slightly doubt the importance of the SDSA. Can the authors try the combination of the membrane shortcut and some attention-free transformers like [1][2] to check if they can also achieve similar performance? [1] J Lee-Thorp et al., “FNet: Mixing Tokens with Fourier Transforms”, NAACL 2022 [2] W Yu et al, “MetaFormer Is Actually What You Need for Vision”, CVPR 2022 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper has few limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We first list your advice and questions, then give our detailed answers. > *Weakness 1 and 2:* There is still a noticeable drop between the spike-driven transformer and the original transformer. Can the authors show some results of the model with more time steps and report the limitations of the model? Can the authors show the performance and energy consumption as the number of time steps decreases? **A:** Generally, in directly training SNNs, for image classification, the larger the timestep, the higher the performance and inference energy cost, and the training cost also increases rapidly. However, as the timestep increases, the performance gain becomes smaller and smaller. Therefore, this is a trade-off problem. In existing deep SNNs [3,4,5], researchers generally adopt $T=4$ by default. Due to time and resource constraints, we only tested up to $T=6$ on spiking Transformer-384. | Timestep | $T=1$ | $T=2$ | $T=4$ | $T=6$| |---|---|---|---|---| | Acc (\%) | 71.7 | 72.9 | 74.6 | 74.8 | | Power (mJ) | 1.13 | 2.33 | 4.50 | 7.17 | **A major limitation of SNNs is that, although SNNs are theoretically highly energy-efficient, they often require specialized hardware (such as neuromorphic chips [6]) for verification.** Thus, SNNs are in fact a sparse computing paradigm that requires algorithm-hardware co-design. This is why we believe that this paper will have a profound impact on the SNN field. **First, our model raises the performance ceiling in the field of SNNs**. Given the fact that this work is the first to incorporate spike-driven into Transformer, it is reasonable to think that it has more room for improvement. **On the other hand, to the best of our knowledge, the proposed Spike-Driven Self-Attention (SDSA) operator is the third class of existing operators besides spike-driven Conv and MLP. 
So, we think our work will also have a profound impact on future neuromorphic chip design.** > *Question:* The ablation study shows the membrane shortcut brings significant accuracy improvement. I slightly doubt the importance of the SDSA. Can the authors try the combination of the membrane shortcut and some attention-free transformers like [1][2] to check if they can also achieve similar performance? **A:** **We would be happy to discuss this with you. MetaFormer has inspired us a lot, and we understand your idea.** We organized some experiments with $T=1$, 100 epochs, and spiking Transformer-8-384. The experimental results are as follows: | Model | Baseline(SDSA, This work) | +SSA [7] | +Fourier [1] | +Pooling [2] | |---|---|---|---|---| | Acc (\%) | 61.0 | 63.7 | Not convergent | 41.2 | We suspect that the performance gap between Baseline(SDSA) and +SSA is due to insufficient training. Therefore, we set $T=4$ and 310 epochs and retrained. The experimental results are as follows: | Model | Baseline(SDSA, This work) | +SSA [7] | |---|---|---| | Acc (\%) | 72.3 | 72.7 | According to our understanding of MetaFormer and the above experimental results, the following preliminary conclusions can be drawn: (1) **We think that the spike-driven Transformer can be considered as a kind of MetaFormer (CAFormer in [8], Conv + ViT). The proposed SDSA is a token mixer.** From this perspective, when we replace the token mixer from SDSA (this work) with SSA [7], the accuracy at both $T=1$ and $T=4$ will improve. However, it should be noted that there is only binary spike-based addition in the proposed SDSA and the computational complexity is $O(ND)$, while the SSA in [7] contains multiplication (multi-bit integer multiplication and scale operations) and the computational complexity is $O(ND^2)$. (2) **Directly employing the Fourier or Pooling operator to replace SDSA as the token mixer does not work well.** The reason has not yet been confirmed. 
But we think it will be a good perspective to understand and improve spike-driven Transformer in the future. --- [1] FNet: Mixing Tokens with Fourier Transforms, In NAACL, 2022. [2] MetaFormer Is Actually What You Need for Vision. In CVPR 2022. [3] Deep Residual Learning in Spiking Neural Networks. In NeurIPS, 2021. [4] Temporal efficient training of spiking neural network via gradient re-weighting. In ICLR 2022. [5] Attention Spiking Neural Networks. In IEEE T-PAMI, 2023. [6] Towards artificial general intelligence with hybrid Tianjic chip architecture, In Nature, 2019. [7] Spikformer: When Spiking Neural Network Meets Transformer. In ICLR, 2023. [8] Metaformer baselines for vision. In arXiv 2022. --- Rebuttal Comment 1.1: Title: Thank you for addressing our questions Comment: Spike Transformer: The authors have addressed our questions with more experimental results. We have raised our score to 7. --- Reply to Comment 1.1.1: Comment: Thank you so much for your endorsement, your comments have inspired us and we are happy to discuss it with you.
Summary: This paper proposes an improved spike transformer by replacing the Spike-Element-Wise shortcut in an existing spike transformer (ref [20]) with the Membrane Shortcut from spike ResNet (ref [26]). Strengths: Matches the SOTA ImageNet top-1 precision achieved by ResNet [26] with slightly lower estimated energy consumption. Weaknesses: Combining known techniques from two papers without modification is considered an incremental improvement. It is not clear why Transformer is a better choice than ConvNet in the context of Spike Neural Network. In ViT papers, the motivation for the Transformer is its model capacity can scale easily to handle large datasets and it has much better parallelism on TPU/GPU than ConvNet. None of these has been indicated as the goal for SNN. As shown in table-2, the ResNet SNN model in ref[27] is equally competitive as the proposed model. The notation is a bit confusing in a few equations. The s in Eq (5) is binary. This should be distinguished from floating-point numbers in Eq (4). But the same symbol R is used for both. The same observation holds for Eq (8)(10)(11). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The outputs of LIF spiking neurons at all timesteps are used as the inputs to the Transformer. This may be the tested method in [20], but why is this a better idea than using the output at the last timestep? As shown in table-2, the power is directly proportional to the timestep count. So this seems a critical parameter to optimize over. In Eq (12), S_L has the shape TxNxD. So outputs at all timesteps are used in classification head? In Eq (6), the input s has no 2d spatial dimensions. What are the 2d dimensions in Conv2d()? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We first list your advice and questions, then give our detailed answers. **And, we really hope that you would re-consider your rating, given this work's notable contribution to the SNN field.** > *Summary and Weakness 1*: This paper proposes an improved spike transformer by replacing Spike-Element-Wise shortcut in an existing spike transformer (ref [20]) with Membrane Shortcut from spike ResNet (ref [26]). Combining known techniques from two papers without modification is considered an incremental improvement. **A**: We would like to discuss the contribution and significance of this work with you in depth. (1) **Goals and status quo in SNN.** The ambition of SNNs is to be a low-power alternative to ANNs. The key to low-power of SNN is the **spike-driven**. There are **two main hurdles** in achieving this goal: First, there is a performance gap between SNNs and ANNs; Second, the low-power of SNNs can only be truly realized when run on a neuromorphic chip, thus SNNs are in fact a computing paradigm that requires algorithm-hardware co-design. (2) **Technical Contributions.** We incorporate the spike-driven paradigm into Transformer by the proposed Spike-driven Transformer. - **SDSA operator.** There are only two types of operators in SNN, spike-driven Conv and MLP. **SDSA is the first spike-driven Self-attention operator** that implements the self-attention using mask and addition without any multiplication (e.g., softmax and scale). SDSA is computationally linear in both tokens and channels. Its energy cost is $87.2\times$ lower than vanilla self-attention. - **Rearrange shortcuts.** We use membrane potential shortcuts to avoid integer (multi-bit spikes). - **Spike-driven Transformer** has only sparse addition and achieves SOTA performance on ImageNet. 
(3) **Significance.** - **Energy efficiency.** Compared with the prior SNNs with the same parameter amount, the spike-driven Transformer has higher performance but lower energy consumption. Compared with the ANN counterpart, the energy efficiency of the spike-driven Transformer is as high as $36.7 \times$. - **Performance.** Making a breakthrough from 0 to 1, we incorporated the spike-driven paradigm into Transformer and achieved SOTA results. There is huge room for future performance improvement. - **Algorithms drive the hardware design.** SDSA is the first spike-driven self-attention operator, which is expected to advance the design of next-generation neuromorphic chips. **Thus, the contribution of this work is by no means a mere substitution of shortcuts on the basis of prior work. We can confidently say that this work will change the field of SNNs in terms of algorithm and neuromorphic chip design.** > *Weakness 2*: It is not clear why Transformer is a better choice than ConvNet in the context of Spike Neural Network. In ViT papers, the motivation for the Transformer is its model capacity can scale easily to handle large datasets and it has much better parallelism on TPU/GPU than ConvNet. None of these has been indicated as the goal for SNN. As shown in table-2, the ResNet SNN model in ref[27] is equally competitive as the proposed model. **A**: This is an important issue. In fact, the proposed architecture is a Conv and ViT hybrid network to extract information in a local-global manner. This is somewhat similar to MetaFormer. - **MetaFormer [1,2]** argues that there is a general architecture abstracted from Transformers by not specifying the token mixer. Setting the operators in the token mixer to Identity mapping, Conv, MLP, or Self-attention will result in different accuracies. The results show that the Self-attention, which can extract global information, can achieve the highest performance. 
- **Spike-driven Transformer** can be considered as a kind of MetaFormer (CAFormer in [2], Conv + ViT). The proposed novel SDSA is a token mixer. We argue that from the perspective of MetaFormer, SDSA operators bring global information to the network based on the infrastructure. Interestingly, the attention SNN in ref [27] also adds a global attention module on vanilla Res-SNN. In contrast to the Multiply-and-Accumulate attention module in ref [27], the global SDSA operator is purely additive. Given the fact that this work is the first to incorporate spike-driven into Transformer, it is reasonable to think that it has more room for improvement. > *Weakness 2, Q3*: The notation is a bit confusing in a few equations. The s in Eq (5) is binary. This should be distinguished from floating-point numbers in Eq (4). But the same symbol R is used for both. The same observation holds for Eq (8)(10)(11). In Eq (6), the input s has no 2d spatial dimensions. What are the 2d dimensions in Conv2d()? **A**: We apologize for the confusion caused by the imprecise notation. In Eq.(6), before executing $Conv2d(\cdot)$, $s\in \mathbb{R}^{T\times N\times D}$ will be transposed into $s\in \mathbb{R}^{T\times C\times H \times W}$. > *Q1, Q2*: The outputs of LIF spiking neurons at all timesteps are used as the inputs to the Transformer. This may be the tested method in [20], but why is this a better idea than using the output at the last timestep? As shown in table-2, the power is directly proportional to the timestep count. So this seems a critical parameter to optimize over. In Eq (12), S_L has the shape TxNxD. So outputs at all timesteps are used in classification head? **A**: Yes, in SNN, it is a default operation to use the output of all timesteps for classification [3], because it is more accurate. Generally, for image classification, the larger the timestep, the higher the performance and inference energy cost, and the training cost also increases rapidly. 
Therefore, this is a trade-off problem. --- [1] Metaformer is actually what you need for vision. In CVPR 2022. [2] Metaformer baselines for vision. In arXiv 2022. [3] Temporal efficient training of spiking neural network via gradient re-weighting. In ICLR 2022. --- Rebuttal 2: Comment: Thanks for the clarification. I have increased my ratings. --- Rebuttal Comment 2.1: Comment: I sincerely appreciate your constructive comments. But I noticed that you don't seem to be improving the rating of this paper as you said in the official comment. I wonder if there was some misunderstanding here; could you please check it again?
Summary: The authors propose a variant of transformer networks with spiking neurons based on the LIF neuron. The submission reformulates the self-attention to use sparse addition and masking, and modifies the residual connections to transmit information in the domain of membrane potentials. They achieve a state of the art result on Imagenet. They present an energy analysis. Strengths: It increases the amount of binary spike-based computations in the transformer. The experimental results are strong. A small ablation study is performed. Weaknesses: The explanation for equations 15 and 16 can be improved The explanation of power estimates can be improved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the authors provide an additional explanation for equations 15 and 16? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: not much discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful feedback and your time in reading our paper. Due to space constraints in the paper, the explanations for Eq.(15) and (16) are rough, and we put some details of energy consumption evaluation in the Supplementary Material. After careful inspection, we found that there is a typo in Eq.(16): we accidentally wrote $V^{i}$ instead of $Q^{i}$. We're sorry for this and hope that the responses below can answer your concerns. > *Question*: Can the authors provide an additional explanation for equations 15 and 16? **A**: Here we first briefly introduce three typical attention mechanisms in existing Transformers from the perspective of computational complexity: Vanilla Self-Attention (VSA) [1], Linear Attention [2], Hydra Attention [3]. Then, we introduce the proposed Spike-Driven Self-Attention (SDSA) and analyze its computational complexity. *** We can understand self-attention from the perspective of computational complexity. Specifically, two matrix multiplications between floating-point $Q$, $K$, $V$ in $\mathbb{R}^{N\times D}$ are included in the VSA, where $N$ is the token number, $D$ is the channel dimension. Generally, VSA performs multi-head self-attention, i.e., dividing $Q$, $K$, $V$ into $H$ heads in the channel dimension. In the $i$-th head, $Q^{i}$, $K^{i}$, $V^{i}$ in $\mathbb{R}^{N \times D/H}$. After the self-attention operation is performed on the $H$ heads respectively, the outputs are concatenated together. (1) Vanilla self-attention [1]. $Q$ and $K$ are matrix multiplied first, and then their output is matrix multiplied with $V$. The computational complexity of VSA is $O(N^2D)$, which has a **quadratic** relationship with the token number $N$. (2) Linear attention [2,3]. $K$ and $V$ are matrix multiplied first, and then their output is matrix multiplied with $Q$. The computational complexity of linear attention is $O(ND^2/H)$, which has a **linear** relationship with the token number $N$. (3) Hydra attention [3]. 
Consider an extreme case in linear attention: set $H=D$. That is, in each head, $Q^{i}$, $K^{i}$, $V^{i}$ are in $\mathbb{R}^{N \times 1}$. Then the computational complexity of hydra attention is $O(ND)$, which has a **linear** relationship with both the token number $N$ and the channel dimension $D$. *** (4) **Spike-driven self-attention (This work)**. We show that SDSA has the same computational complexity as hydra attention (Lines 179-184), i.e., $O(ND)$. We here assume $T=1$ for ease of exposition. Note that in an SNN, a spiking neuron layer $SN(\cdot)$ first converts $Q$, $K$, $V$ into spike tensors $Q_{S}$, $K_{S}$, and $V_{S}$. The proposed SDSA Version 1 (SDSA-V1) is executed as: $SDSA(Q, K, V)$ = $g(Q_{S}, K_{S}) \otimes V_{S}$ = $SN(SUM_{c}(Q_{S} \otimes K_{S})) \otimes V_{S}$ (14) where $\otimes$ is the Hadamard product, $g(\cdot)$ is used to compute the attention map, and $SUM_{c}(\cdot)$ represents the sum of each column. The outputs of both $g(\cdot)$ and $SUM_{c}(\cdot)$ are $D$-dimensional row vectors. The Hadamard product between spike tensors is equivalent to the mask operation. **Since the Hadamard product among $Q_{S}$, $K_{S}$, and $V_{S}$ can be exchanged**, Eq.(14) can also be written as (SDSA-V2): $SDSA(Q, K, V)$ = $Q_{S} \otimes g(K_{S}, V_{S})$ = $Q_{S} \otimes SN(SUM_{c}(K_{S} \otimes V_{S}))$ (15) In Eq.(15), $K_{S}$ and $V_{S}$ participate in the operation first, **thus it is a kind of linear attention. Further, we consider the special operation of the Hadamard product.** Specifically, the output of $SUM_{c}(K_{S} \otimes V_{S})$ is a $D$-dimensional row vector, and the value of the $i$-th element in $SUM_{c}(K_{S} \otimes V_{S})$ is $D_{i}$; then $D_{i}$ = $SUM_{c}(K_{S}^{i} \otimes V_{S}^{i})$ = $(K_{S}^{i})^{\rm{T}} \odot V_{S}^{i}$ = $SN(K^{i})^{\rm{T}} \odot SN(V^{i})$, where $\odot$ is the dot product operation, and $K_{S}^{i} = SN(K^{i})$ and $V_{S}^{i} = SN(V^{i})$ are the $i$-th column vectors in $K_{S}$ and $V_{S}$, respectively. 
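The column-sum identity $D_{i} = SUM_{c}(K_{S}^{i} \otimes V_{S}^{i}) = (K_{S}^{i})^{\rm{T}} \odot V_{S}^{i}$ is easy to verify numerically; below is a minimal NumPy sketch (purely illustrative, not our actual implementation) using random binary spike tensors:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 8, 4  # token number and channel dimension (toy sizes)

# Binary spike tensors, as produced by a spiking neuron layer SN(.)
K_s = (rng.random((N, D)) < 0.5).astype(int)
V_s = (rng.random((N, D)) < 0.5).astype(int)

# SUM_c(K_s ⊗ V_s): Hadamard product followed by a column-wise sum,
# yielding a D-dimensional row vector of N additions per entry
sum_c = (K_s * V_s).sum(axis=0)

# Identity: the i-th entry equals the dot product K_s[:, i] . V_s[:, i]
dots = np.array([K_s[:, i] @ V_s[:, i] for i in range(D)])
assert np.array_equal(sum_c, dots)
```

Since the Hadamard product of binary vectors only keeps positions where both spike, each $D_{i}$ costs at most $N$ additions and no multiplications, which is where the $O(ND)$ addition count comes from.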
**In simple terms, taking the Hadamard product of two column vectors $a$ and $b$ and summing the result is equivalent to the dot product of $a$ and $b$, i.e., $SUM_{c}(a \otimes b) = a^{\rm{T}} \odot b$**. Therefore, we can get the V2 version of SDSA: $SDSA(Q^{i}, K^{i}, V^{i})$ = $SN(Q^{i})g(K^{i}, V^{i})$ = $SN(Q^{i})SN(SUM_{c}(K_{S}^{i} \otimes V_{S}^{i}))$ = $SN(Q^{i}) SN(SN(K^{i})^{\rm{T}} \odot {SN}(V^{i}))$ (16) where $Q^{i}, K^{i}, V^{i}$ in $\mathbb{R}^{N\times 1}$ are the $i$-th vectors in $Q, K, V$ respectively. The output of $g(K^{i}, V^{i})$ is a scalar, 0 or 1. Since the operation between $SN(Q^{i})$ and $g(K^{i}, V^{i})$ is a mask, the whole SDSA only needs to evaluate $g(K^{i}, V^{i}) = SN(D_{i})$ $D$ times. Note that only $N$ additions need to be performed in $D_{i}$ = $SUM_{c}(K_{S}^{i} \otimes V_{S}^{i})$. Thus, the computational complexity of SDSA is $O(0+ND)$, which is linear with both $N$ and $D$. Vectors $K_{S}^{i}$ and $V_{S}^{i}$ are very sparse, typically less than 0.01 (see Supplementary Material). Together, the whole SDSA only needs about $0.02ND$ additions. *** [1] Attention is all you need. In: NeurIPS (2017) [2] Transformers are RNNs: fast autoregressive transformers with linear attention. In: ICML (2020) [3] Hydra attention: Efficient attention with many heads. In: ECCV (2022) --- Rebuttal Comment 1.1: Title: why is the right hand side in eq (16) equal to the right hand side in eq (14)? Comment: Thank you for writing the explanation, however this still raises doubts: eq 16, rhs is: $SN(Q^i) \cdot SN ( SN(K^i)^\top \odot SN(V^i) ) $ eq 14 rhs is: $ SN ( SN(K^i)^\top \odot SN(Q^i) ) \cdot SN(V^i) $ the Hadamard product can be exchanged: $Q_s \otimes K_s \otimes V_s = (Q_s \otimes K_s ) \otimes V_s = Q_s \otimes (K_s \otimes V_s) $, but not when there are thresholding non-linearities applied in between. As far as the reviewer understands, SN is a thresholding operation resulting in 1 or 0. 
So something needs to be proven to show that $SN ( SUM_c Q_s \otimes K_s ) \otimes SN( V_s) = SN(Q_s) \otimes SN ( SUM_c K_s \otimes V_s) $. This could be due to the fact that the sums are all non-negative, but even then the threshold should be at 1 or below; if the threshold is above 1, then this equality might not hold anymore. Can you please clarify? --- Reply to Comment 1.1.1: Title: Eq.(14) and Eq.(15) are functionally equivalent. Eq.(15) and Eq.(16) are mathematically equivalent Comment: Thank you very much for your insightful comments. The expression associated with Eq.(15) is not rigorous. **Eq.(14) and Eq.(15) are indeed not equivalent mathematically, but they are equivalent functionally.** **SDSA-V1.** Given a spike input feature sequence $S \in \mathbb{R}^{T\times N\times D}$, float-point $Q$, $K$, and $V$ in $\mathbb{R}^{T\times N\times D}$ are calculated by three learnable linear matrices, respectively. A spike neuron layer $SN(\cdot)$ follows, converting $Q$, $K$, $V$ into spike tensors $Q_{S}$, $K_{S}$, and $V_{S}$. SDSA-V1 is presented as: $SDSA(Q, K, V)=g(Q_{S}, K_{S})\otimes V_{S}= SN(SUM_{c}(Q_{S} \otimes K_{S})) \otimes V_{S},$ (14) **Since $Q_{S}, K_{S}, V_{S}$ are generated by the same type of function (a linear transformation) from the same input $X$ and there is no softmax in Eq.(14), $Q_{S}, K_{S}, V_{S}$ are no longer Query, Key, Value with clear meaning. Therefore, we can let $K_{S}$ be the Query matrix and $V_{S}$ be the Key matrix.** This is Eq.(15) below: $SDSA(Q, K, V)=Q_{S} \otimes g(K_{S}, V_{S}) = Q_{S} \otimes SN(SUM_{c}(K_{S} \otimes V_{S}))$, (15) **In summary, Eq.(14) and Eq.(15) are functionally equivalent. Eq.(15) and Eq.(16) are mathematically equivalent (please see previous reply).** For rigor, we will revise Lines 177-178 in the main text. 
Original Lines 177-178: ``Since the Hadamard product among $Q_{S}$, $K_{S}$, and $V_{S}$ in $\mathbb{R}^{N\times D}$ can be exchanged, Eq.(14) can also be written as: $SDSA(Q, K, V)=Q_{S} \otimes g(K_{S}, V_{S}) = Q_{S} \otimes SN(SUM_{c}(K_{S} \otimes V_{S}))$ " Revised Lines 177-178: ``***Since we exploit the Hadamard product and the sum function to calculate the similarity, and $Q_{S}$, $K_{S}$, and $V_{S}$ are generated in exactly the same way, Eq.(14) is functionally equivalent to the following formula***: $SDSA(Q, K, V)=Q_{S} \otimes g(K_{S}, V_{S}) = Q_{S} \otimes SN(SUM_{c}(K_{S} \otimes V_{S}))$ " We would also like to explain why we have given Eq.(15) and Eq.(16). A hallmark of linear Transformers [1,2] is that the Key and Value matrices are multiplied first, followed by the Query matrix. We want to give readers the formal intuition that the proposed SDSA is a kind of linear attention. In the code implementation, there is no difference in performance between Eq.(14) and Eq.(15). Thank you again for your careful review and we will make changes accordingly. --- [1] Transformers are RNNs: fast autoregressive transformers with linear attention. In: ICML (2020) [2] Hydra attention: Efficient attention with many heads. In: ECCV (2022)
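The concession above (Eq.(14) and Eq.(15) are functionally but not mathematically equivalent) can be made concrete with a tiny counterexample; in the sketch below, a Heaviside threshold at 1 stands in for $SN(\cdot)$ (an assumption for illustration only, not the authors' neuron model):

```python
import numpy as np

def SN(x, thresh=1):
    """Toy spiking neuron: fire (1) when the value reaches the threshold."""
    return (np.asarray(x) >= thresh).astype(int)

# Tiny binary spike tensors with N=2 tokens, D=1 channel
Q_s = np.array([[1], [0]])
K_s = np.array([[1], [1]])
V_s = np.array([[0], [1]])

# Eq.(14): SN(SUM_c(Q_s ⊗ K_s)) ⊗ V_s
v1 = SN((Q_s * K_s).sum(axis=0)) * V_s
# Eq.(15): Q_s ⊗ SN(SUM_c(K_s ⊗ V_s))
v2 = Q_s * SN((K_s * V_s).sum(axis=0))

# The two outputs differ elementwise, so the equality only holds in the
# functional sense discussed in the rebuttal, not mathematically.
assert not np.array_equal(v1, v2)
```

Here `v1` is `[[0], [1]]` while `v2` is `[[1], [0]]`: both are valid spike-driven attention outputs, but the non-linearity $SN(\cdot)$ between the two products prevents a strict algebraic exchange.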
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Should We Learn Most Likely Functions or Parameters?
Accept (poster)
Summary: This paper investigates an important yet neglected question in machine learning: should we train the model in the parameter space or the function space? The authors show the benefits and shortcomings of function-space MAP estimation, which could provide a guide for practitioners. Last but not least, the authors propose a scalable approximation of function-space MAP estimation for deep learning. Strengths: - Thorough analysis of the pros and cons of parameter-space MAP estimation and function-space MAP estimation - Propose a scalable approximation for large neural networks which extends the application area of function-space MAP estimation - Well written and easy to follow Weaknesses: - I wonder how universal are the pros and cons demonstrated in the paper, considering they're being shown in a relatively simple toy data set - I understand the author is not claiming L-MAP works well in all cases, but I am still interested to see the performance of L-MAP on tasks other than classification Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weakness above Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See weakness above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive questions and suggestions! We were pleased that you found our manuscript to be **"well-written and easy to follow"** and that you highlighted our **"thorough analysis"** of the pros and cons of PS-MAP and FS-MAP. We address your questions and comments below. Please let us know if you have any remaining questions. --- > I wonder how universal are the pros and cons demonstrated in the paper, considering they're being shown in a relatively simple toy data set Thank you for this important question. The goal of our experiments on synthetic datasets was precisely to probe the pros and cons of FS-MAP in a well-controlled setting and the results are highly in line with the expectation from our theoretical analysis. We also conducted extensive evaluations with neural networks on datasets commonly used in deep learning. Moreover, we have run new experiments on UCI regression and transfer learning to broaden our evaluations, which can be found in our general response. Finally, we added a discussion on a comprehensive set of criteria for when FS-MAP and L-MAP should or shouldn't be expected to outperform PS-MAP in more general settings. --- > I understand the author is not claiming L-MAP works well in all cases, but I am still interested to see the performance of L-MAP on tasks other than classification We appreciate the suggestions for additional evaluations and ran additional experiments to compare L-MAP and PS-MAP on UCI regression datasets and transfer learning on CIFAR-10. ### UCI regression We found that L-MAP outperformed PS-MAP on 7 out of 8 datasets according to normalized test RMSE, showing L-MAP can also benefit generalization on many regression tasks. 
| Dataset | L-MAP | PS-MAP | |:----------|:------------------|:------------------| | Boston | 0.352 ± 0.040 | **0.329 ± 0.033** | | Concrete | **0.261 ± 0.013** | 0.272 ± 0.016 | | Energy | **0.041 ± 0.002** | 0.042 ± 0.003 | | Naval | **0.018 ± 0.002** | 0.032 ± 0.005 | | Power | **0.218 ± 0.005** | 0.219 ± 0.006 | | Protein | **0.580 ± 0.005** | 0.584 ± 0.005 | | Winered | **0.792 ± 0.031** | 0.851 ± 0.029 | | Winewhite | **0.714 ± 0.017** | 0.758 ± 0.013 | We adopt the following setup: we use a 3-hidden-layer MLP with 256 units and tune the weight decay (prior variance) and Laplacian regularization strengths on a validation set. We standardize both the inputs and targets and report the mean and standard error of test RMSE across six different runs. For L-MAP we set $p_X = \mathcal{N}(0, I)$ since the inputs were standardized and fairly low-dimensional. ### Transfer Learning from ImageNet to CIFAR-10 Using a ResNet18 pre-trained on ImageNet, we tested the performance of further fine-tuning on CIFAR-10 for 5 epochs, with samples from CIFAR-100 as the evaluation set. As with training from scratch, we still use a batch size of 128 and 128 evaluation points. | Method | Acc. | Sel. Acc. | Avg. NLL | ECE | | ----- | ----- | ----- | ----- | ----- | | PS-MAP | 95.3% | 99.5% | 0.14 | 1.2% | | L-MAP | 95.4% | 99.5% | 0.14 | 1.0% | We find L-MAP is able to perform marginally better while also slightly improving the calibration of the classifier. --- Please let us know if you have any further questions. --- Rebuttal Comment 1.1: Comment: Thank you for the response, it answered my questions and concerns. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for recommending acceptance of our submission! We will take your comments and suggestions into account when updating the manuscript. --- Reply to Comment 1.1.2: Comment: Thank you again for supporting acceptance of our submission! 
Your feedback and suggestions were very valuable and have helped us further improve our submission, and we will include the additional results and clarifications from our rebuttal in the revised manuscript. If you agree that the suggested clarifications and new results improved our submission, we would be grateful if you would consider raising your score to reflect these additions. Thank you for your time and effort!
Summary: This work investigates the task of performing inference over model functions rather than model parameters. In particular, the authors propose an objective function *function-space MAP (FS-MAP)*, which intuitively is the usual MAP objective where the prior term is taken over functions rather than parameters. Defining this function-space prior is fundamentally challenging, and to do so, the authors generalize the techniques of Wolpert (1993) to consider functions observed on (possibly) infinite sets. The authors perform an in-depth empirical evaluation of this proposed objective on several synthetic and real-world benchmarks. In addition, a scalable approximation to the objective is proposed in Section 4.3. Strengths: - The paper is overall very well-written and clear in its aims and conclusions. - The key ideas of the paper are likely of interest to the community and the findings have the potential to be impactful for future work on function-space inference. - I found the discussion relating the function-space geometry to the sampling distribution in Section 3.2 fascinating, and this was a novel connection that I had not considered before. - The experimental evaluation is generally sound and the conclusions drawn are supported by the evidence provided throughout the paper. - The proposed objective is a non-trivial extension of previous work in this area, and the scalable approximation in Section 4.3 is similarly non-trivial. Weaknesses: - There are several places in the paper lacking rigor and where I believe there to be unaddressed latent fundamental issues (see the questions section). As noted on Line 96, a fundamental difficulty in function-space inference is defining an appropriate notion of a prior density over infinite dimensional spaces. 
The methods of this work are obtained via discretization of the functions followed by a heuristic passage to the limit, but it is unclear how this relates to the underlying function-space distributions, which are the true objects of interest. - The empirical results on real-world data (Table 1) do not show benefits of using the FS-MAP objective over the standard PS-MAP objective. - Note that I do not view this necessarily as a weakness of the work itself but rather a potential limitation of the methodology itself, and an honest evaluation of the proposed methodology is valuable. - The paper needs to be better contextualized within the literature, e.g. there is no related work section. See e.g. [1-4] for some potentially relevant works on function-space inference which are potentially of interest. [1] Function-Space Regularization in Neural Networks: A Probabilistic Perspective, Rudner et al., ICML 2023 [2] Understanding Variational Inference in Function-Space, Burt et al., AABI 2020 [3] Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning, Wild et al., NeurIPS 2022 [4] Tractable Function-Space Variational Inference in Bayesian Neural Networks, Rudner et al., NeurIPS 2022 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Under what conditions is Equation (10) well-defined? It was not clear that the integral in this expression is finite without e.g. further assumptions on the model class. (It is straightforward to give examples where the integral is infinite) - Can the infinite-observation objective (Equation (11)) be justified as an MAP estimate? Equation (8) follows from the change-of-variables formula, but it was not clear to me if (or how) Equation 11 was being justified in the same manner, or merely as a heuristic. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors appropriately discuss the limitations of their work within the submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive questions and suggestions! We were pleased that you find our submission to be **"of interest to the community"** and that our findings **"have the potential to be impactful for future work on function-space inference"**. We were also happy to see that you found the manuscript to be **"very well-written"** and **"clear in its aims and conclusions"**. We address your questions and comments below. Please let us know if you have any remaining questions. --- > [...] The methods of this work are obtained via discretization of the functions followed by a heuristic passage to the limit, but it is unclear how this relates to the underlying function-space distributions You correctly noted that the main difficulty in establishing any connection between our result and the so-called true object of interest involving the density evaluation $p(f_\theta)$ is due to technical challenges in defining the infinite-dimensional probability density function $p(f_\theta)$ in the first place. Section 3.2 establishes the limiting behavior of the finite-point objective derived by Wolpert (which exactly corresponds to the value of the prior probability density function of a distribution over finite function evaluations computed at a given parameterized function $f_\theta$) and thereby avoids explicitly defining $p(f_\theta)$. We expect the evaluation of $p(f_\theta)$ obtained using the infinite-limit expression to agree with an evaluation that could be obtained using a suitably-defined base measure over the function space. A justification for this is provided in Appendix B.2. The technique used in Section 3.2 is referred to as taking the continuum limit and has previously been applied to analogous problems in physics and applied mathematics. For example, in numerical analysis, the solutions of discretized differential equations often converge to the solutions of the continuous equations as the mesh size goes to zero. 
Similarly, lattice field theories in physics, where the continuous spacetime is discretized into a lattice, have provided results consistent with the continuum approach in many cases. --- > The empirical results on real-world data (Table 1) do not show benefits of using the FS-MAP objective over the standard PS-MAP objective. Thank you for pointing out that further discussion of the results would be useful. Indeed, as we discussed in our general response, we were not expecting FS-MAP or L-MAP to lead to significant improvements in these settings where we do not have a highly informative prior. In contrast, we use a prior that exactly corresponds to the data-generating process in Section 3.3, leading to significant gains by performing FS-MAP. Here, since our Gaussian prior is rather non-informative about the data-generating process in the FashionMNIST and CIFAR-10 experiments, we were pleasantly surprised to find that L-MAP achieves a significant reduction in ECE on CIFAR-10 with $p\_X=$CIFAR-100 and noticeable improvement in selective prediction accuracy on FashionMNIST with all three choices for $p\_X$, though the accuracy is comparable with PS-MAP. --- > Under what conditions is Equation (10) well-defined? It was not clear that the integral in this expression is finite without e.g. further assumptions on the model class. It is true that Equation (10) is not finite for every model. However, for most models, including commonly used neural networks, we can show it is indeed finite. To show the expectation in Equation (10) is finite, it is sufficient to show that the integrand $J_\theta(x)^\top J_\theta(x)$ is bounded for any $x$ in the support of $p_X,$ a sufficient condition for which is that the function $f_\theta$ has bounded Lipschitz constants over the support of $p_X$. Consequently, Equation (10) is finite for any $\theta$ for a wide range of models, e.g., any MLP with standard activation functions (ReLU, tanh). 
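As a concrete illustration of this boundedness argument, the following NumPy sketch (purely illustrative; the toy network, its sizes, and the choice of a compactly supported $p_X = \mathcal{U}[-1,1]^2$ are assumptions for this example, not the paper's setup) Monte Carlo estimates $\mathbb{E}_x[J_\theta(x)^\top J_\theta(x)]$ for a small tanh MLP and confirms that every entry of the estimate is finite:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 2, 8

# Flattened parameters of a 1-hidden-layer tanh network (hypothetical sizes)
theta = rng.standard_normal(d_h * d_in + d_h + d_h) * 0.5

def f(theta, x):
    W1 = theta[:d_h * d_in].reshape(d_h, d_in)
    b1 = theta[d_h * d_in:d_h * d_in + d_h]
    w2 = theta[-d_h:]
    return w2 @ np.tanh(W1 @ x + b1)

def param_jacobian(theta, x, eps=1e-5):
    # Finite-difference Jacobian of f_theta(x) w.r.t. the parameters theta
    J = np.zeros_like(theta)
    for p in range(theta.size):
        e = np.zeros_like(theta); e[p] = eps
        J[p] = (f(theta + e, x) - f(theta - e, x)) / (2 * eps)
    return J

# Monte Carlo estimate of E_x[J^T J] with p_X uniform on [-1, 1]^2:
# tanh and its derivative are bounded and the support is compact,
# so every entry of the estimate stays finite as samples accumulate.
xs = rng.uniform(-1, 1, size=(500, d_in))
JtJ = np.mean([np.outer(param_jacobian(theta, x), param_jacobian(theta, x))
               for x in xs], axis=0)
assert np.all(np.isfinite(JtJ))
```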
Note this is not a necessary condition, and Equation (10) can still be finite depending on the specifics of $p_X$ and $J_\theta(x).$ --- > Can the infinite-observation objective (Equation (11)) be justified as an MAP estimate? [...] it was not clear to me if (or how) Equation 11 was being justified in the same manner, or merely as a heuristic. As mentioned above, the main difficulty in establishing any connection between our result and the true mode of $p(f_\theta | \mathcal{D})$ lies in the technical challenges of defining the prior density $p(f_\theta)$ in the first place. On the other hand, our result obtained by taking the continuum limit is not merely a heuristic but follows an established approach with a history of successful applications in other areas of mathematics and physics that enables tractable computational methods when the continuum approach presents challenges. --- > The paper needs to be better contextualized within the literature, e.g. there is no related work section. Function-space regularization in NNs is a burgeoning field, and we appreciate the opportunity to further contextualize our work: [1] discuss function-space VI in NNs. They explain in which cases the function-space variational objective proposed in [2] is not well-defined. This pathology is conceptually similar to the pathology identified in our work. [3] propose approximations to make the variational objective proposed by [2] tractable and identify a prior over functions for which the objective is well-defined. Unlike our approach, the sampling distribution in [3] is not part of the probabilistic model but of the approximation. [4] also perform variational inference but they minimize a Wasserstein instead of a KL divergence, and they consider Gaussian process models instead of stochastic NNs. 
[5] specify a prior over parameters that induce a desirable prior over functions but perform inference in parameter space instead of deriving a function-space objective to avoid the challenges described in our submission. --- Please let us know if you have any further questions. --- Rebuttal 2: Comment: Thank you again for supporting acceptance of our submission! Your feedback and suggestions were very valuable and have helped us further improve our submission, and we will include the additional results and clarifications from our rebuttal in the revised manuscript. If you agree that the suggested clarifications and new results improved our submission, we would be grateful if you would consider raising your score to reflect these additions. Thank you for your time and effort!
Summary: This paper analyzes the question: should we find the parameter that maximizes $p(\theta|D)$ or $p(f_\theta|D)$, given a data distribution $D$. The former is the classic MAP, or PS-MAP ("parameter space") in this work, and the latter is called FS-MAP ("function space"). The paper shows that neither is universally more desirable than the other, and describes pros and cons of both methods: - FS-MAP has the advantage of directly optimizing the quantity we care about (i.e. function value), is invariant to reparameterization, and can be more robust to label noise. - On the other hand, PS-MAP tends to be more stable, and while it is not invariant to reparameterization, it's unclear how much this would be a disadvantage in practice. Moreover, PS-MAP can be closer to the Bayesian model average than FS-MAP. The findings are supported by both theory and experiments. Strengths: - The paper has extensive comparisons on the limitations and applicability of both methods. - The paper addresses practical considerations: - The problem of having a (near) singular Jacobian is addressed by adding perturbations. - For scalability, the problem of requiring two forward steps is addressed by zeroth-order approximation, which requires 1 forward pass only. Weaknesses: While I appreciate the discussions, I'm not sure the evidences are clear enough. - Fig 2 shows that there's no difference between FS-MAP and PS-MAP when the number of samples is around 10k, a sample size that most practical settings can afford. - In Table 1, the results from both methods are very close, often within the margin of standard deviation. Overall, it seems that both methods compare similarly, and it's unclear how much we would lose by simply always choosing PS-MAP. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Sec 3.3: the choice of prior is important; is there guidance on how to choose a prior in practice? 
- Is there guidance on how to choose between FS-MAP and PS-MAP given a practical problem? How much would the choice matter (the experiments seem to show that the choice doesn't matter much)? - Fig 2b: why is there an increase in curvature as the number of samples increases? - also, is there a reason for measuring flatness as the average eigenvalue (rather than e.g. max)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper clearly discusses the limitations. There is no direct societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive questions and suggestions! We were pleased that you highlighted the **"extensive comparison"** of limitations and applicability of PS-MAP and FS-MAP and that the paper addresses **"practical considerations"**. We address your questions and comments below. Please let us know if you have any remaining questions. --- > In Table 1, the results from both methods are very close, often within the margin of standard deviation. Thank you for pointing out that further discussion of the results would be useful. Indeed, as we discussed in our general response, we were not expecting FS-MAP or L-MAP to lead to significant improvements in these settings where we do not have a highly informative prior. In contrast, we use a prior that exactly corresponds to the data-generating process in Section 3.3, leading to significant gains by performing FS-MAP. Here, since our Gaussian prior is rather non-informative about the data-generating process in the FashionMNIST and CIFAR-10 experiments, we were pleasantly surprised to find that L-MAP achieves significant reduction in ECE on CIFAR-10 with $p\_X=$CIFAR-100 and noticeable improvement in selective prediction accuracy on FashionMNIST with all three choices for $p\_X$, though the accuracy is comparable with PS-MAP. --- > Fig 2 shows that there's no difference between FS-MAP and PS-MAP when the number of samples is around 10k, a sample size that most practical settings can afford. Thank you for the observation. This early convergence to low test error for both methods is because the data is generated exactly from our prior, greatly simplifying the task and reducing the sample complexity to a much lower level compared to many real-world applications. Relative to the complexity of this task, FS-MAP still achieves a noticeable improvement in efficiency compared to PS-MAP. --- > Sec 3.3: the choice of prior is important; is there guidance on how to choose a prior in practice? 
Indeed, Sec 3.3 shows that well-specification of the prior as well as $p_X$ was important to achieving the best performance with FS-MAP. Choosing a good prior often involves exploiting problem-specific inductive biases (e.g., convolutional networks induce a prior that favors translation equivariant functions, suitable for problems in computer vision) and is a fundamental question in machine learning and outside the scope of this work. On the other hand, our work does provide guidance on choosing $p_X$ by revealing its connection to the metric in function space (Section 3.2) and shows choosing $p_X$ to closely approximate the input distribution at test time often works better (Figure 3). --- > Is there guidance on how to choose between FS-MAP and PS-MAP given a practical problem? How much would the choice matter (the experiments seem to show that the choice doesn't matter much)? In general, despite the flatness-favoring properties of FS-MAP, there is no strong reason to expect that either FS-MAP or PS-MAP will generalize better in the absence of a good prior that is highly informative of the data-generating process. When such a prior is available, we expect that the choice between FS-MAP and PS-MAP to be more important and FS-MAP to more likely outperform PS-MAP. We provide a comprehensive discussion on potentially useful criteria we can rely on to choose between FS-MAP or PS-MAP in the general response. --- > Fig 2b: why is there an increase in curvature as the number of samples increases? Thanks for the intriguing observation! There is indeed a slight increase in curvature for FS-MAP as we increase training samples. We hypothesize this is because, as we add more training data, the volume of solutions that can fit the data decreases, since more constraints need to be satisfied. As a result, it becomes more difficult to not increase the loss as we move away from the minimum. --- > is there a reason for measuring flatness as the average eigenvalue (rather than e.g. 
max)? We didn't have any strong reason against using max, though taking the mean captures more information about the local geometry, as it is aggregated over all dimensions of the high-dimensional parameter space, whereas the max eigenvalue only measures the loss curvature in a single direction. --- Please let us know if you have any further questions! --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed response and for the additional empirical results! I still think that the paper can be further strengthened if the practical takeaway is clearer (since the comparisons are close and mostly at a small scale), but I think the current results are interesting enough for an acceptance, so I'm raising my score. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for recommending acceptance of our submission! We will take your comments and suggestions into account when updating the manuscript.
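The mean-versus-max distinction discussed in the reply above can be made concrete: for a symmetric Hessian, the mean eigenvalue equals $\mathrm{tr}(H)/P$ and aggregates curvature over all $P$ directions, while the max reflects a single direction. A small NumPy sketch (with a random symmetric matrix standing in for an actual loss Hessian, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
P = 50

# Random symmetric matrix standing in for a loss Hessian (hypothetical)
A = rng.standard_normal((P, P))
H = (A + A.T) / 2

eigs = np.linalg.eigvalsh(H)

# The mean eigenvalue equals trace(H)/P, so it can be computed (or
# stochastically estimated) without a full eigendecomposition; the max
# eigenvalue measures curvature along only one direction.
assert np.isclose(eigs.mean(), np.trace(H) / P)
assert eigs.max() >= eigs.mean()
```

This trace identity is also why the mean eigenvalue is often cheaper to estimate in high dimensions than the max, e.g. via stochastic trace estimators.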
Summary: The paper compares the widely-used parameter estimation of machine learning against function estimation for maximum a posteriori (MAP) estimation. The authors provide a detailed analysis and mathematical derivation of why there are significant differences in results for function vs. parameter MAP, and introduce conditions under which the function MAP can avoid its failure modes. To be honest, I am not an expert in the area, and cannot assess how novel the proposed method is relative to the relevant literature. I would rely on other reviewers for this aspect of the evaluation. Strengths: - The paper is fairly well-written in most cases, and grounds strong motivation and intuition for the proposed idea. Even though I am not an expert in the area, I enjoyed reading through the paper. Weaknesses: - The paper is heavy on math. Even though the authors did a decent job of making the equations approachable, some important definitions are missing, and many details are deferred to the supplementary material. It is suggested to double-check the completeness of the formulation in the main paper. If it cannot be self-contained, I am afraid the publication should head to another venue with a longer length limit (maybe a journal). For example, P was never introduced, but could only be inferred as the parameter dimension. Also, $\theta_R$ and $\theta_L$ were not clearly defined in Figure 1, which is not desirable considering that it should serve as a strong motivating example. - The results are not very competitive, especially in Table 1 for MNIST and CIFAR-10. Are there any intuitions why this is the case? On the other hand, the graphs in the figures appear quite promising. I would appreciate more discussion in the paragraph on benchmark performance (line 297-298). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - This might simply be my ignorance. But what is the difference between the N samples ($x_\mathcal{D}^{(i)}$) vs. the M samples ($\hat{x}$) used in Equation 5? 
Is it necessary to derive different samples from the space, or can we use the same samples? Roughly how big a number would you choose for N and M, respectively? The paper states a condition based on M ($MK \geq P$). But is there any condition on N? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors clearly stated limitations in extensive aspects. However, it would be informative to have detailed numbers on the increased numerical complexity or time/memory requirements for using FS-MAP (or LS-MAP) over PS-MAP for some famous architectures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive questions and suggestions! We were pleased that you **"enjoyed reading the paper"** and that you found it to be **"well-written"**. We address your questions and comments below. Please let us know if you have any remaining questions. --- > Even though the authors did a decent job of making the equations approachable, some important definitions are missing. [...] For example, P was never introduced, but could only be inferred to be the parameter dimension. Also, $\theta_R$ and $\theta_L$ were not clearly defined in Figure 1, which is not desirable considering that it should serve as a strong motivating example. Thank you for suggesting clearer definitions of notations. We will clarify in the preliminaries that $P$ is the number of parameters and $\theta \in \mathbb{R}^P$. We will also clarify that $\theta\_L$ and $\theta\_R$ refer to the parameters controlling the heights of the left and right Gaussians, respectively. If you think it would be helpful, we will include a list of definitions in the appendix. --- > The results are not very competitive, especially in Table 1 for MNIST and CIFAR-10. Are there any intuitions why this is the case? On the other hand, the graphs in the figures appear quite promising. I would appreciate more discussion in the paragraph on benchmark performance (lines 297-298). Thank you for pointing out that further discussion of the results would be useful. We included such a discussion in our general response and will add it to the manuscript. As discussed in our general response, we were not expecting FS-MAP or L-MAP to lead to significant improvements in these settings where we do not have a highly informative prior. In contrast, we use a prior that exactly corresponds to the data-generating process in Section 3.3, leading to significant gains by performing FS-MAP. 
Here, since our Gaussian prior is rather non-informative about the data-generating process in the FashionMNIST and CIFAR-10 experiments, we were pleasantly surprised to find that L-MAP achieves a significant reduction in ECE on CIFAR-10 with $p\_X=$CIFAR-100 and a noticeable improvement in selective prediction accuracy on FashionMNIST with all three choices for $p\_X$, though the accuracy is comparable with PS-MAP. --- > This might be simply my ignorance. But what is the difference between $N$ samples $(x_{\mathcal{D}}^{(i)})$ vs. $M$ samples $(\hat{x})$ used in Equation 5? Is it necessary to derive different samples from the space, or can we use the same samples? Roughly how big a number would you choose for $N$ and $M$, respectively? The paper states a condition based on $M$ ($MK \geq P$). But is there any condition on N? $N$ is the number of points in the training set $x_{\mathcal{D}}$ and there is no condition based on $N$. $M$ is the number of points in the set of evaluation points $\hat{x}$ for computing the prior $p(f\_{\theta}(\hat{x}))$. $\hat{x}$ and $x_{\mathcal{D}}$ are generally not the same since $\hat{x}$, being a special case of specifying an evaluation distribution $p_X,$ corresponds to a choice of metric in function space as we show in Section 3.2. Technically, using the same samples is possible, and we investigate setting $\hat{x} = x\_{\mathcal{D}}$ in our experiments (see $p\_X = Train$ in Table 1). More generally, however, we wish to set $M$ to be as large as possible to appropriately account for the behavior of the function throughout the entire input space, and Figure 3a shows that FS-MAP can indeed benefit from using larger $M.$ While it is generally intractable to use exceedingly large $M$ as discussed in Section 4.3, there we also present an efficient approximation via L-MAP that doesn't suffer from this intractability by allowing unbiased Monte-Carlo estimates for objectives defined with arbitrarily large $M$. 
--- > The authors clearly stated limitations in extensive aspects. However, it would be informative to have detailed numbers on the increased numerical complexity or time/memory requirements for using FS-MAP (or LS-MAP) over PS-MAP for some famous architectures. Compared to PS-MAP, L-MAP only requires an additional forward pass on $S$ evaluation points to evaluate the Laplacian regularization term, where $S$ is the number of Monte Carlo samples. This is much faster compared to the exact FS-MAP, as we discuss in Section 4.3 lines 291-293. To showcase concrete run times, we train an MLP for 10,000 steps on the Two Moons dataset with a batch size of $200$ and $1600$ evaluation points. The MLP has only 2 hidden layers and 16 units each so that FS-MAP is tractable. L-MAP, finishing in 31 seconds, is 33 times faster than FS-MAP, which took 16 minutes, and only 1.4 times slower than PS-MAP, which took 22 seconds. We also show the time per gradient step and peak memory consumption for ResNet-18: for the scalable experiments with neural networks in Section 4.3, we report the approximate wall-clock times for standard PS-MAP and our proposed approximation L-MAP in the table below. All experiments were run with batch size $128$ and $S=128$ Monte Carlo samples for the evaluation points. As expected, L-MAP takes approximately twice the amount of time due to the additional forward pass. Note that it is not feasible to run FS-MAP in this case as it requires too much memory. | Dataset | Method | Gradient Step (ms) | Epoch (s) | | ------- | -------- | -------- | -------- | | FashionMNIST | PS-MAP | 54 | 24 | | | L-MAP | 109 | 45 | | CIFAR-10 | PS-MAP | 64 | 27 | | | L-MAP | 126 | 52 | All times are reported based on an NVIDIA TITAN RTX 24 GB GPU, where each run takes approximately 7 GB of GPU memory in JAX. --- Please let us know if you have any further questions! --- Rebuttal Comment 1.1: Comment: Thanks for the reply. It cleared most of my concerns. 
--- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for recommending acceptance of our submission! We will take your comments and suggestions into account when updating the manuscript. --- Reply to Comment 1.1.2: Comment: Thank you again for supporting acceptance of our submission! Your feedback and suggestions were very valuable and have helped us further improve our submission, and we will include the additional results and clarifications from our rebuttal in the revised manuscript. If you agree that the suggested clarifications and new results improved our submission, we would be grateful if you would consider raising your score to reflect these additions. Thank you for your time and effort!
Rebuttal 1: Rebuttal: ## General Response to All Reviewers We thank all reviewers for their feedback, support, and unanimous recognition that our paper is interesting, well-written, and thorough in presenting the pros and cons of the considered approaches. We start by providing a general response aimed at addressing the most common questions from the reviewers. We first present additional experiments, finding that L-MAP outperforms PS-MAP on UCI regression and transfer learning from ImageNet to CIFAR-10, and then present a comprehensive set of criteria for when FS-MAP and L-MAP should or shouldn't be expected to outperform PS-MAP in general settings, explaining why FS-MAP significantly outperforms PS-MAP in some of our experiments but not others. We hope our response addresses the key points and inquiries raised by the reviewers and can be taken into account in the final assessment. ## Additional Experiments on UCI Regression and Transfer Learning We provide additional regression and transfer learning experiments with L-MAP, as suggested by some reviewers. The results are consistent with those in the manuscript and show that L-MAP tends to perform as well as or better than PS-MAP. ### UCI We found that L-MAP outperformed PS-MAP on 7 out of 8 datasets according to normalized test RMSE, showing that L-MAP can also benefit generalization on many regression tasks. 
| Dataset | L-MAP | PS-MAP | |:----------|:------------------|:------------------| | Boston | 0.352 ± 0.040 | **0.329 ± 0.033** | | Concrete | **0.261 ± 0.013** | 0.272 ± 0.016 | | Energy | **0.041 ± 0.002** | 0.042 ± 0.003 | | Naval | **0.018 ± 0.002** | 0.032 ± 0.005 | | Power | **0.218 ± 0.005** | 0.219 ± 0.006 | | Protein | **0.580 ± 0.005** | 0.584 ± 0.005 | | Winered | **0.792 ± 0.031** | 0.851 ± 0.029 | | Winewhite | **0.714 ± 0.017** | 0.758 ± 0.013 | We adopt the following setup: we use a 3-hidden-layer MLP with 256 units and tune the weight decay (prior variance) and Laplacian regularization strengths on a validation set. We standardize both the inputs and targets and report the mean and standard error of test RMSE across six different runs. For L-MAP we set $p_X = \mathcal{N}(0, I)$ since the inputs were standardized and fairly low-dimensional. ### Transfer Learning from ImageNet to CIFAR-10 Using a ResNet-18 pre-trained on ImageNet, we fine-tuned it on CIFAR-10 for five epochs, with samples from CIFAR-100 as the evaluation set. As with training from scratch, we still use a batch size of 128 and an evaluation set of 128 points. | Method | Acc. | Sel. Acc. | Avg. NLL | ECE | | ----- | ----- | ----- | ----- | ----- | | PS-MAP | 95.3% | 99.5% | 0.14 | 1.2% | | L-MAP | 95.4% | 99.5% | 0.14 | 1.0% | We find L-MAP is able to perform marginally better while improving the calibration of the classifier slightly. ## When FS-MAP should or shouldn't outperform PS-MAP Many reviewers noted that while FS-MAP leads to noticeably better performance in the synthetic example in Section 3.3, less improvement is observed in neural network experiments on image classification in Section 4.4. We note that these results are exactly in line with our theoretical observation and empirical findings in Section 3.3 that FS-MAP's superior performance depends on a well-specified probabilistic model (prior and likelihood). 
As one of our key points in the paper, in general, there is no strong reason to expect either FS-MAP or PS-MAP will generalize better since our prior can be arbitrarily different from the true data-generating process. Nevertheless, we now summarize our current best understanding of when FS-MAP should or shouldn't outperform PS-MAP. The example in Section 3.3 exemplifies four criteria that generally favor FS-MAP: 1) The Jacobian is non-singular everywhere and therefore, FS-MAP does not lead to pathological solutions as described in Section 4.1. 2) The likelihood and the prior exactly correspond to the data-generating process. Consequently, decision theory states that Bayesian Model Average (BMA) is the optimal predictor in terms of expected test RMSE. 3) The prior $p(\theta) = \mathcal{N}(\theta | 0, 10^2)$ is diffuse in parameter space but fairly concentrated in function space because each coefficient $\tanh(\theta_i)$ follows a highly bimodal distribution at $-1$ and $1$ as a result of the prior standard deviation being much larger compared to the scale at which $\tanh$ saturates (which is approximately 2). 4) Therefore, FS-MAP will better approximate the Bayesian model average than PS-MAP, following a similar argument in Section 4.2 line 271-274, and therefore achieve better generalization. Correspondingly, there are situations where FS-MAP will likely underperform PS-MAP: 1) When the prior is poorly specified. For example, in a regression setting, if the observation noise is extremely over-estimated, FS-MAP may severely underfit the data because the log determinant regularization can overpower the likelihood. We demonstrate this effect in Appendix B.7, Figure 8. 2) When the log determinant contains singularities. As discussed in Section 4.1, in this case, FS-MAP leads to pathological solutions that ignore the data, though L-MAP mitigates this issue. 
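Criterion 3 above is easy to verify numerically: with a wide Gaussian prior, almost all of the induced mass of $\tanh(\theta_i)$ sits near $\pm 1$ (an illustrative sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(loc=0.0, scale=10.0, size=100_000)  # diffuse prior N(0, 10^2)
coef = np.tanh(theta)                                  # induced coefficient prior

# sigma = 10 is much larger than the scale at which tanh saturates (~2),
# so the induced distribution is highly bimodal at -1 and +1.
frac_saturated = np.mean(np.abs(coef) > 0.9)
```

Roughly 88% of the samples satisfy $|\tanh(\theta_i)| > 0.9$, so the prior is diffuse in parameter space yet concentrated in function space.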
We hope this response addresses the potential concern that FS-MAP doesn't always outperform PS-MAP, and our thorough and transparent demonstration of the pros and cons of both approaches is viewed as a strength. --- **Thank you for reviewing our work!**
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper explores the question of learning most likely functions vs most likely parameters in the context of MAP estimation, a common setting including minimizing log likelihood with L1 or L2 regularization, as these regularizations correspond to imposing a prior. Differences in parameterization affect the MAP through change of measure for the prior, and potentially in very unhelpful ways. Because of these potential problems with parameterizations, authors explore doing MAP not on parameters as we usually do (referred to as PS-MAP), but on function outputs for a set of inputs, an idea from Wolpert (1993). Authors refer to this as function-space MAP (FS-MAP). The contributions are as follows: - While Wolpert considered function outputs on a finite set of inputs, authors generalize to function outputs on a measure on input space. Authors emphasize that this measure of input space can be specified according to the task at hand (e.g. restricting to natural images, rather than all images). - Authors empirically compare FS-MAP and PS-MAP on a simple example and empirically find that FS-MAP finds flatter minima, has better generalization in some settings, and produces less overfitting. Authors also discuss how FS-MAP and PS-MAP compare with the posterior mean in variations on this simple setting. Also, FS-MAP’s test performance depends on the measure of inputs that are considered important. - Authors discuss problems with FS-MAP: FS-MAP can be computationally very expensive, and can behave in pathological ways under certain conditions. Authors also provide conditions under which it is well-behaved, as well as a scalable approximation to FS-MAP, called L-MAP, that also behaves ok even when FS-MAP would behave pathologically. Authors empirically compare FS-MAP and PS-MAP on neural networks for image classification. L-MAP performs similarly to PS-MAP, but is better calibrated in one experimental setting, and also finds flatter minima. Strengths: 1. 
Authors raise and discuss an interesting question that I had not thought about. 2. There is a substantive discussion of PS-MAP and FS-MAP, including weaknesses of both approaches. 3. Authors provide a computationally feasible, well-behaved approximation to a computationally too-expensive and poorly-behaved objective (L-MAP, vs FS-MAP). Weaknesses: 1. There is a tension between statements like “parameters have no meaning independent of the functional form of the models they parameterize” (lines 22-23) and the fact that the difference between best parameters vs best functions is a matter of parameterization of the prior, and the prior is a prior on the parameters (Equation 1). This tension is central to the question of parameterization for MAP and is not addressed. 2. This tension continues into the experiments, many of which emphasize flat minima. Authors cite Dinh’s “Sharp minima can generalize for deep nets”, which points out that the flatness / sharpness of minima is a function of parameterization. 3. Function-space MAP and priors on functions are defined much later in the paper than ideal. For example, in Figure 1, I do not know how FS-MAP is calculated. p_X (important to FS-MAP) is also not specified. 4. The terminology of “most likely functions” and “more probable functions” obscures the dependence of these terms on the choice of p_X (e.g. line 37, 184). 5. The connection to variational inference does not seem particularly strong. MAP as VI would involve choosing the best q, while the construction in Section B.9 does not do that. It’s also unclear to me why we care about these constructions; e.g. why do we care about q, besides that it ultimately leads to (13) and (B.13)? 6. Also, (B.8) looks like a KL divergence, not a variational lower bound / ELBO. Maybe I’m missing something here. 7. The experiments and discussion around experiments could be more thorough. See questions. 8. 
If someone told me they were looking at most likely functions, rather than most likely parameters for MAP, I would be curious about priors on functions in place of priors on parameters. (This isn't a weakness / question / limitation; it's more of a comment.) Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. On line 135-136, “the distance between two function evaluations is measured locally by [...]” : what two function evaluations are being referred to? 2. Why is it important to have infinitely many evaluation points? I assume that’s not the appeal but that’s how it was advertised. (Lines 127-128) 3. What is p_X=N/A in Figure 3b? 4. Can you provide intuition for why FS-MAP achieves better generalization, in the settings where it does achieve better generalization? Can you also provide intuition for why it is less prone to overfitting? Can you discuss when this wouldn’t be the case? (Section 3.3) 5. The numbers in Table 1 are all very similar. What do you think is the reason for this? Should we be surprised by the similarities between the different choices of p_X, given how much p_X mattered in Section 3.3? 6. “For example, while it is 331 natural to assume that FS-MAP more closely approximates a Bayesian model average”: Why is it natural to make this assumption? (lines 330-331) 7. Could you briefly discuss why the posterior mean is particularly important (line 254)? The citation that follows is a textbook. 8. Why do we want to constrain eigenvalues of the Jacobian? I understand that you can run into problems if they are quite large. Is that the only reason? Is there a risk of over-constraining? 9. Why is calibration better with FS-MAP for CIFAR-10 and p_X=CIFAR-100 compared to PS-MAP, but not other settings? 10. Can you explain further why L-MAP is more robust to training with label noise? The explanation in line 325 is Section 3.3, which appears to be about input space, not about label noise 11. 
Should I expect these comparisons between FS-MAP and PS-MAP in other settings? In what settings should these results hold vs not (beyond satisfying the assumptions)? 12. While Wolpert’s evaluation on finite sets of points is a special case of the authors’ formulation, are they not effectively the same in practical settings? 13. Authors discuss a measure on which to consider function outputs. I am curious about the connections to distribution shift. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The experiments and discussion around experiments could be more thorough. See questions. It's not clear to me which experimental results should generalize, and under what conditions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We're glad you appreciated the paper and found it well-written. We address your comments below. Please let us know if you have any remaining questions. 1. **When and why FS-MAP can outperform PS-MAP and reduce over-fitting** Thank you for asking these important questions at the heart of this paper. While in general there is little reason to expect either FS-MAP or PS-MAP will perform better, we provide a thorough discussion on important special cases in the general response that explains our empirical findings. In terms of reducing overfitting, a simple reason why FS-MAP can reduce overfitting is the introduction of the additional regularization term which can prevent the model from fitting the data. In addition, fitting noise often requires precise settings of certain parameters to account for unnatural noisy patterns, which would incur a large Jacobian determinant penalty. 2. **Infinitely many evaluation points** When using only a finite set of evaluation points $\hat{x}$ as in Section 3.1, one can only find the MAP estimate for the function evaluated at a finite set of points $f\_\theta(\hat{x})$, which is different from the MAP estimate for the function $f\_\theta(\cdot),$ equivalent to an infinite dimensional vector. As our goal is to find the MAP estimate for the whole function $f\_\theta(\cdot),$ we seek to use an infinite set of evaluation points. In Figure 3a, we also empirically show that FS-MAP performs monotonically better as we use more evaluation points and achieves optimal performance with an infinite set. 3. **Distinction from Wolpert's formulation** Our formulation is different from Wolpert's formulation even when using finite samples, since a random finite set of points is sampled at every gradient step and the underlying $p\_X$ can have support on infinitely many points. 4. 
**Benchmark Performance**: Our prior in the FashionMNIST and CIFAR-10 experiments was non-informative; as a result, we do not expect FS-MAP / L-MAP to outperform PS-MAP, though it did improve calibration in many cases. In contrast, the improvements in Section 3.3 stemmed from using a prior matching the data-generating process. We additionally evaluate L-MAP on UCI datasets and transfer learning on CIFAR-10 and present the results in the general response. 5. **Dependence of most likely functions on $p_X$** We agree that this is an important subtlety. Indeed, revealing this subtlety and establishing the connection to choosing a metric in function space is a key contribution of this work and we will emphasize it more. We additionally note that such dependence on an underlying metric applies to any MAP estimation problem in continuous spaces, including PS-MAP, as the probability density always depends on a choice of base measure. 6. **FS-MAP approximating BMA** It is natural to assume FS-MAP approximates BMA better due to the arguments we presented in lines 259-263. Namely, with a sufficient amount of data, both $p(\theta | \mathcal{D})$ and $p(f_\theta | \mathcal{D})$ are approximately Gaussian. Given Gaussian posteriors, the BMA function coincides with its posterior mode $f_{\theta}$, i.e., FS-MAP. But since PS-MAP seeks the posterior mode of $\theta$ rather than $f_\theta,$ it learns a different function compared to FS-MAP and thus BMA. 7. **Posterior mean importance**: The posterior predictive mean function combines all possible model hypotheses weighted by their posterior likelihood, rather than a single point estimate. It often achieves better generalization and uncertainty estimates as a result [1]. 8. **On model parameterization** We emphasize that given any prior $p(\theta)$ and parameterization $f_\theta$, PS-MAP will learn different functions upon reparameterization as we show in Appendix A, whereas FS-MAP will always learn the same function. 
So while the precise difference between PS-MAP and FS-MAP depends on PS-MAP's parameterization, the existence of this difference is universal. Similarly, while flatness metrics such as Hessian eigenvalues are not parameterization-invariant, FS-MAP's preference for flat minima is universal when compared with PS-MAP, as evident from Equation 11. As such, we believe there is no tension between these observations but would appreciate further clarifications. 9. **On the L-MAP derivation** Thank you for catching our typo. Eqn. (B.8) is indeed a **negative** evidence lower bound. Regarding motivation for considering variational inference (VI), unlike in conventional VI, the goal of the formulation described in Appendix B.9 is not primarily to learn an accurate approximation to the posterior but to obtain a non-pathological objective in a principled way. In addition, we do choose the best $q$ within the variational family defined in Appendix B.9 through learning the optimal mean parameter $\theta$. 10. **Numerical Complexity**: L-MAP requires an additional forward pass on $S$ evaluation points. It's faster than FS-MAP and only slightly slower than PS-MAP. For a Two Moons dataset experiment with an MLP, L-MAP was 33 times faster than FS-MAP and only 1.4 times slower than PS-MAP. In scalable experiments, L-MAP takes roughly twice the time of PS-MAP. | Dataset | Method | Gradient Step (ms) | Epoch (s) | | ------- | -------- | -------- | -------- | | FashionMNIST | PS-MAP | 54 | 24 | | | L-MAP | 109 | 45 | | CIFAR-10 | PS-MAP | 64 | 27 | | | L-MAP | 126 | 52 | 11. **Calibration**: We believe L-MAP's calibration improvement on CIFAR-10 with $p_X =$ CIFAR-100 stems from the evaluation distribution covering relevant regions of the input space without duplicating the training set. 12. **Notation**: We'll clarify unclear notations. 
Specifically, $p_X = N/A$ corresponds to PS-MAP in Figure 3b, and in lines 135-136 the functions are $f$ and $f+df$, where $df$ is an infinitesimal separation. [1] Bayesian deep learning and a probabilistic perspective of generalization, Wilson et al., NeurIPS 2020 --- Rebuttal Comment 1.1: Comment: Thanks for your response; it addresses most of my concerns and I am also now better informed. I see that there is much discussion about the performance of FS-MAP vs PS-MAP, but not of how to make sense of the question of "best functions" vs "best parameters" given that the setting has a prior on the *parameters*. Perhaps this is not the point of the paper, but I am still wondering. --- Reply to Comment 1.1.1: Comment: Thank you! We are glad to hear your concerns are addressed. While the details of the distinction between most likely functions and parameters do depend on the model parameterization, the existence of this distinction is generic, regardless of the parameterization. Moreover, considering a prior on parameters does not limit the scope of our work. Indeed, any prior in function space can be equivalently expressed as a prior in parameter space using the identity in Equation 6 (or its generalization we presented in Section 3.2). --- Reply to Comment 1.1.2: Comment: Thank you again for supporting acceptance of our submission! Your feedback and suggestions were very valuable and have helped us further improve our submission, and we will include the additional results and clarifications from our rebuttal in the revised manuscript. If you agree that the suggested clarifications and new results improved our submission, we would be grateful if you would consider raising your score to reflect these additions. Thank you for your time and effort!
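The identity invoked above (a prior on functions corresponds to a prior on parameters through a Jacobian factor) can be checked numerically in one dimension; the map eta = tanh(theta) and the standard normal prior below are our illustrative choices, not the paper's Equation 6:

```python
import numpy as np

# Parameter prior p(theta) = N(0, 1); reparameterize eta = tanh(theta).
# Change of variables: p(eta) = p(arctanh(eta)) * |d arctanh / d eta|
#                             = p(arctanh(eta)) / (1 - eta^2).
p_theta = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)

eta = np.linspace(-0.999999, 0.999999, 2_000_001)
p_eta = p_theta(np.arctanh(eta)) / (1 - eta ** 2)

# The transformed density still integrates to (essentially) 1 (trapezoid rule).
mass = np.sum((p_eta[1:] + p_eta[:-1]) / 2 * np.diff(eta))
```

The same Jacobian factor is what separates the FS-MAP objective from the PS-MAP objective once the densities are written over a common variable.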
Scale-teaching: Robust Multi-scale Training for Time Series Classification with Noisy Labels
Accept (poster)
Summary: This paper proposes a deep learning paradigm, namely Scale-teaching, to cope with noisy labels in time series classification, designing a fine-to-coarse cross-scale fusion mechanism that learns discriminative patterns by using time series at different scales to train multiple DNNs simultaneously. Additionally, each network is trained in a cross-teaching manner, using complementary information from different scales to select small-loss samples as clean labels. Meanwhile, the paper introduces a multi-scale embedding graph learning method via label propagation to correct the labels of unselected large-loss samples. Extensive experiments demonstrate the superior performance of the proposed method. Strengths: 1. The research problem is interesting and important. 2. The idea is simple and effective. 3. The paper is well written. 4. Extensive experiments with good results. Weaknesses: 1. The meanings of some equations or symbols are not clear, like the operator || in Eq. (2). 2. The problem definition in Label Correction for Eq. (5) is difficult to understand. 3. The proposed method is relatively complex, and its procedure is not clear, including Algorithm 1. 4. In Figure 2, it would be better to introduce the flowchart of the proposed method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In Eq. (2), how does the operator || operate on r_i^k, v_i^k, and so on? 2. How is Eq. (5) obtained? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The proposed Scale-teaching paradigm only utilizes three scales, and its training time increases with the number of scales. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 & Q1**: The meaning of the operator || in Eq. (2), and how the operator || operates on r_i^k, v_i^k. **A1**: Thanks for your comment. The operator || in Equation (2) concatenates two vectors into a new vector. For example, for vector **a** = [0,1] and vector **b** = [2,3], **a** || **b** = [0,1,2,3]. r_i^k represents the single-scale k embedding, and v_i^k is the multi-scale ([1, 2, …, k], where k is the downsampling scale) fusion embedding via Eq. (2). In particular, if k = 1, we use the single scale for classification training and let v_i^k = r_i^k. If k > 1, v_i^k = f(r_i^k || v_i^{k-t} || (r_i^k - v_i^{k-t}) || (r_i^k * v_i^{k-t})), where v_i^{k-t} denotes the multi-scale ([1, 2, …, k-t], where k-t is the downsampling scale) fusion embedding. The source code implementing Eq. (2) is given below:

```
import torch
import torch.nn as nn

# f(.) in Eq. (2): maps the concatenated features back to the embedding size.
f = nn.Sequential(
    nn.Linear(128 * 4, 256),
    nn.BatchNorm1d(256),
    nn.ReLU(inplace=True),
    nn.Linear(256, 128),
)

# The operator ||: embed_x is r_i^k and scale_x is v_i^{k-t}, both of shape
# [batch, 128], produced earlier in the pipeline.
fea_concat = torch.cat((embed_x, scale_x, (embed_x - scale_x), (embed_x * scale_x)), dim=1)
v_i_k = f(fea_concat)
```

**W2 & Q2**: The problem definition in Label Correction for Eq. (5), and how to obtain Eq. (5)? **A2**: Thanks for your comment. In Eq. (5), **Y** contains labeled and unlabeled samples; Y_ij = 1 if x_i is labeled as y_i = j, otherwise Y_ij = 0. Label propagation theory utilizes the diffusion process [R1] to obtain the pseudo-labels of unlabeled samples via the labeled samples and the nearest-neighbor graph **Q**. Hence, we define the sequence {F_t} to denote the diffusion process and describe it as Eq. (5). When *t = 0*, *F_0* is initialized as **Y**. Further, when the timestamp *t* in Eq. (5) is large, the quality of the predicted pseudo-labels *F_t* improves through the label information of **Y** and the graph edge weights of the nearest-neighbor graph **Q**. 
Conveniently, the sequence {F_t} has a closed-form solution [R1], which can be defined as Eq. (6). [R1] Learning with local and global consistency. NIPS, 2003. **W3 & W4**: The proposed method is relatively complex, and it is better to introduce the flowchart of the proposed method in Figure 2. **A3**: Thanks for your comment. The proposed Scale-teaching paradigm includes two processes: (i) clean label selection; (ii) multi-scale graph embedding learning. For clean label selection, networks A, B, and C use clean labels provided by cross-teaching (A$\rightarrow$B, B$\rightarrow$C, C$\rightarrow$A) to guide each other's classification training. For multi-scale graph embedding learning, Scale-teaching first performs cross-scale fusion from fine to coarse (A$\rightarrow$B, B$\rightarrow$C). Then, we utilize pseudo-labels obtained via multi-scale embedding graph learning as corrected labels for those time series not selected as clean-labeled data. In addition, we give pseudo-code for the Scale-teaching paradigm in Figure 2:

```
Step 1: Obtain single-scale embeddings r_A, r_B, r_C
    r_A = encoder_1(x_A)
    r_B = encoder_2(x_B)
    r_C = encoder_3(x_C)
Step 2: Obtain cross-scale embeddings v_A, v_B, v_C via the fusion in Eq. (2)
    v_A = r_A
    v_B = Eq.2(r_B, v_A)
    v_C = Eq.2(r_C, v_B)
Step 3: Obtain clean labels y_A, y_B, y_C for cross-teaching training
    y_A = classifier_3(v_C)  ## Using small-loss criterion
    y_B = classifier_1(v_A)  ## Using small-loss criterion
    y_C = classifier_2(v_B)  ## Using small-loss criterion
Step 4: Obtain corrected labels yc_A, yc_B, yc_C via label propagation, Eq. (6)
    yc_A = Eq.6(v_A, y_A)
    yc_B = Eq.6(v_B, y_B)
    yc_C = Eq.6(v_C, y_C)
Step 5: Overall training
    Update encoder_1 and classifier_1 via cross-entropy loss(v_A, yc_A)
    Update encoder_2 and classifier_2 via cross-entropy loss(v_B, yc_B)
    Update encoder_3 and classifier_3 via cross-entropy loss(v_C, yc_C)
```

--- Rebuttal Comment 1.1: Comment: Thank you to all the authors for the response. 
Most of my concerns have been addressed. However, it is necessary to clearly present them in the next version. --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply.
Summary: This paper proposes a paradigm to improve the small-loss criterion for time series noisy labels, mainly addressing the problem that external noise easily distorts time series' discriminative patterns. The motivation is clear, and experiments have shown good results. However, the reviewer is still concerned about some issues. Strengths: 1. DNNs with different random initializations have high consistency in classification results for clean labeled samples in the early training period, while there is disagreement in the classification of noisy labeled samples; the design of cross-scale fusion using inputs with different scales is lighter than methods that use different networks, such as co-teaching, which are time-consuming. Weaknesses: 1. Although the proposed method is useful, the network is similar to multi-scale feature learning like FPN, and the label propagation part is also not novel. 2. The downsampling methods need more comparison. 3. Momentum updating is also useful for other methods; although the goal of using it in this paper is reasonable, and Scale-teaching achieves SOTA even without momentum updating, it is still an unfair comparison. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. What is the fundamental difference between the proposed method and FPN and traditional label propagation in graph learning? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors seem to discuss some limitations of the method in terms of computation efficiency. The reviewer suggests more discussion of energy consumption for more general usage.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 & Q1**: The network is similar to multi-scale feature learning like FPN and the label propagation part is also not novel. Also, what is the fundamental difference between the proposed method and FPN and traditional label propagation in graph learning? **A1**: Thanks for your comment. Based on the keyword *FPN*, we found the article [R1], which we assume is the FPN being referred to. First, we summarize the differences between Scale-teaching and FPN as follows:
```
(1) FPN and Scale-teaching solve different problems. FPN is used for image object
    detection, while Scale-teaching is used for time series classification with
    noisy labels. In addition, time series data is a collection of data points
    sorted in chronological order, which differs from image data. In general, it
    is hard to use an image model directly for time series.
(2) FPN takes a single-scale image as input and uses a top-down pyramid network
    to learn the multi-scale embeddings of the image. Scale-teaching takes
    different scale sequences of the time series as input and uses the
    cross-scale fusion mechanism to learn the multi-scale embeddings of the
    time series.
(3) For different scale embeddings, FPN uses shared classifiers/regressors for
    object detection. In contrast, Scale-teaching employs multiple classifiers
    with different initializations for classification training, thus exploiting
    the divergence information of the DNNs on the noisy labels to obtain more
    robust multi-scale embeddings.
(4) FPN uses one randomly initialized DNN to learn multi-scale embeddings of an
    image. Scale-teaching uses several differently initialized DNNs to learn
    multi-scale embeddings of a time series, and uses selected clean labels for
    cross-teaching classification training.
```
For the differences between Scale-teaching and Traditional Label Propagation (TLP) in graph learning, we summarize as follows:
```
(1) TLP is generally used for semi-supervised learning.
    Scale-teaching solves the time-series noisy label learning problem with
    label propagation.
(2) TLP generally uses single-scale data for graph construction. Scale-teaching
    utilizes learned multi-scale embeddings for graph learning.
(3) TLP requires explicitly correctly labeled data for modeling. Scale-teaching
    utilizes selected small-loss data as correctly labeled data for modeling.
```
[R1] Feature Pyramid Networks for Object Detection. CVPR, 2017. **W2**: The downsampling methods need more comparison. **A2**: Thanks for your comment. In our supplementary material, we provide a comparative analysis of downsampling methods in Section E. First, we analyze the impact of the downsampling scale sequence list. From Figure 2 in Section E, we find that Scale-teaching achieves its highest classification accuracy using four different scales for training, indicating that more input scales do not necessarily improve classification performance. In addition, using three or four scales of sequences effectively improves the classification performance of Scale-teaching compared with using two different scales. Then, we analyze the impact of the input scale order. As shown in Figure 3 in Section E, we find that the finer-to-coarser order performs better overall, because it can use a single fine-scale sequence, which has excellent classification performance from the start, to gradually promote the classification performance of the multi-scale fusion embeddings. **W3**: Momentum updating is also useful for other methods. **A3**: Thanks for your comment. Existing noisy-label learning methods rarely consider utilizing robust feature representations to cope with noisy labels. Specifically, existing noisy-label learning methods mainly design robust learning paradigms at the loss level, while Scale-teaching focuses on handling noisy labels with robust multi-scale feature representations.
Therefore, Scale-teaching designs the cross-scale mechanism and combines it with momentum updating to learn robust multi-scale feature representations.
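To illustrate the momentum-updating idea above, a minimal sketch of an exponential-moving-average embedding bank is given below; the bank layout, the momentum value gamma, and all sizes are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

# Hypothetical sketch: keep one momentum-smoothed embedding per training
# sample; gamma and the dimensions are illustrative, not the paper's values.
gamma = 0.9
num_samples, dim = 100, 128
ema_bank = np.zeros((num_samples, dim))

def momentum_update(indices, new_embeddings):
    """Blend fresh batch embeddings into the bank: e <- gamma*e + (1-gamma)*new."""
    ema_bank[indices] = gamma * ema_bank[indices] + (1 - gamma) * new_embeddings

batch_idx = np.array([0, 1, 2])
momentum_update(batch_idx, np.ones((3, dim)))
# after one update from zeros, the touched rows equal (1 - gamma) = 0.1
```

The smoothed bank changes slowly between epochs, which is what makes the embeddings used for graph construction robust to per-batch noise.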
Summary: The paper proposes a time series classification algorithm that works across multiple scales with a noise-robust loss function. The authors claim improvements due to the multi-scale network architecture and an objective function capable of handling label noise, on a variety of datasets in UCR128 and beyond. Strengths: The paper is mostly well organized. Statistical tests were performed in all experiments, and the paper has a good mix of summary statistics for large evaluation runs (like in Table 1) and more qualitative results showing properties of the algorithm (e.g. the training dynamics in Figure 5). The authors also perform ablation studies and attempt to test the effect of different components of their algorithm. Weaknesses: - Presentation can generally be improved. In section 3, it is not fully clear which components are preliminaries present in existing work, and which are the new algorithmic components of the new approach. According to the title and abstract, these should be the multi-scale and label noise correction parts of the model. Maybe the headings could be further adapted to make this clear (e.g. right now, it is not clear why "cross-scale fusion" mostly addresses the multi-scale aspect, while "multi-scale embedding graph learning" mostly covers the label noise robustness part). - The figure caption for Figure 2 could be a bit longer and actually explain the figure content. **Minor**: - typo: emebeddings (l. 56) - the definition of the considered noise model comes too late and should be mentioned directly in Section 3. - Table 1 has a lot of numbers. Given that you ran a statistical test, consider, e.g., graying out the non-significant results for easier readability. - sec 4.5 has a few typos (missing spaces, etc.) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The work seems largely incremental, given that the method (3.4) is composed of techniques used in [44], [40], and label propagation.
Could you highlight the main conceptual novelty of your algorithm compared to previous approaches? - What is the test performed in Table 1? I would suggest including this directly in the caption. - l. 327 in the conclusion claims "... can utilize the multi-scale properties of time series to effectively handle noisy labels". Where is this particular point validated, in your opinion? - How are hyperparameters selected? Can this be made more explicit in the main paper? - What are additional limitations of the assumed noise model and correction mechanism? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The work seems largely incremental, given that the method (3.4) is composed of techniques used in [40], [44], and label propagation. Could you highlight the main conceptual novelty of your algorithm compared to previous approaches? **A1**: Thanks for your question. Literature [40] uses original sample features for graph construction, targeting semi-supervised learning. Literature [44] designs an unsupervised reconstruction loss for semi-supervised learning using a momentum update strategy. In contrast to [40], we construct the graph using learned multi-scale embeddings. Unlike [44], we perform graph construction using momentum-updated multi-scale embeddings to exploit robust multi-scale embeddings for noisy label correction. Compared to existing noisy label learning methods, our conceptual innovation is to exploit the complementary information between sequences of different scales and design a cross-scale fusion mechanism for learning more robust multi-scale embeddings. Furthermore, we use multi-scale embeddings to introduce graph learning based on label propagation for noisy label correction. [40] Label propagation through linear neighborhoods. ICML, 2006. [44] Temporal ensembling for semi-supervised learning. arXiv, 2016. **Q2**: What is the test performed in Table 1? I would suggest including this directly in the caption. **A2**: Thanks for your question and suggestion. The results in Table 1 show the test classification accuracy statistics of different baselines on four individual large datasets, the UCR 128 archive, and the UEA 30 archive. The Avg. Rank indicates the average ranking of the corresponding baseline's test classification accuracy among all baselines for each benchmark. In the next version, we will update the description in the caption of Table 1. **Q3**: l. 327 in the conclusion claims "... can utilize the multi-scale properties of time series to effectively handle noisy labels".
Where is this particular point validated, in your opinion? **A3**: Thanks for your question. In Section 4.3, we analyze the cross-scale fusion mechanism for Scale-teaching. Figure 3 in Section 4.3 illustrates that the classification results of different scale sequences exhibit evident complementary information. Specifically, Figure 3 (c) demonstrates that Scale-teaching (larger shaded circle area) has significantly more samples with correct category predictions for the classifiers corresponding to the coarse and fine scales on the UCR 128 archive than w/o cross-scale fusion (smaller shaded circle area). Additionally, Scale-teaching on the coarse $\rightarrow$ fine scale also shows a significantly larger cross-circle area (525) compared to w/o cross-scale fusion (207). These results indicate that Scale-teaching can improve the model's classification performance by effectively utilizing the complementary information between different scales in the presence of noisy labels. Furthermore, the t-SNE visualization in Figure 4 reveals that the multi-scale embeddings learned by Scale-teaching are more discriminative across categories. Hence, we claim that Scale-teaching can effectively leverage the multi-scale properties of time series to handle noisy labels. **Q4**: How are hyperparameters selected? Can this be made more explicit in the main paper? **A4**: Thanks for your question. Our experiment contains 162 datasets. It would be time-consuming to perform hyperparameter selection for each dataset. Therefore, the hyperparameters of Scale-teaching are not carefully tuned for each dataset, and most of the hyperparameters are set based on the default hyperparameters of related works. The learning rate and epoch are set based on the parameters of existing noise-label learning methods, such as FINE and CULCU. $\alpha$ in Eq. 3, $\sigma$ in Eq. 4 and $\beta$ in Eq. 5 are set based on the default hyperparameters of related label propagation works. 
$e_{warm}$ is based on FINE's settings. $e_{update}$, $\gamma$, and the batch size are based on manual empirical settings without a specific hyperparameter analysis. The largest neighbor $K$ is set based on human experience; we ran a simple test on several datasets and found that a larger value of $K$ does not improve classification performance but instead increases the running time of the model. In our next release, we will explain in detail the basis for choosing each hyperparameter. **Q5**: What are additional limitations of the assumed noise model and correction mechanism? **A5**: Thanks for your question. It may be difficult for our proposed method to achieve good classification performance with noise ratios greater than 50%. In particular, the literature [R3] suggests that the small-loss criterion selects clean labeled samples that still contain many noisy labels when the noise ratio is greater than 50%. Therefore, existing work on label-noise learning based on the small-loss criterion (e.g., Co-teaching, CULCU) has experimentally set label noise ratios of less than 50%. [R3] Towards understanding deep learning from noisy labels with small-loss criterion. IJCAI, 2021. **W1**: Presentation can generally be improved. **A6**: Thanks for your comment. We think your suggestions are excellent. In our next release, we will update the titles and organization of Section 3 based on your suggestions. **W2**: The figure caption for Figure 2 could be a bit longer and actually explain the figure content. **A7**: Thanks for your comment. We will update the caption of Figure 2 in the next version. **W3**: Minor: typo: emebeddings (l. 56), etc. **A8**: Thanks for your comment. We will fix the above issues in the next version.
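As a side note on the graph-construction step mentioned in A1, here is a hedged sketch of building a k-nearest-neighbor affinity graph from learned embeddings; the function name, k, and sigma are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

# Hedged sketch: build a k-nearest-neighbor affinity graph Q from learned
# embeddings, as used for label-propagation-based correction. The function
# name, k, and sigma are illustrative, not the paper's exact settings.
def knn_affinity(emb, k=3, sigma=1.0):
    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                       # Gaussian affinities
    np.fill_diagonal(W, 0.0)                                 # no self-edges
    keep = np.argsort(-W, axis=1)[:, :k]                     # k strongest edges per node
    mask = np.zeros_like(W, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    W = np.where(mask | mask.T, W, 0.0)                      # symmetrize the support
    return W / W.sum(axis=1, keepdims=True)                  # row-normalized transitions

emb = np.random.default_rng(1).normal(size=(10, 16))
Q = knn_affinity(emb)
```

With a row-stochastic Q of this kind in hand, a diffusion of the form in Eq. (5) can be run on one-hot labels of the selected clean samples.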
Summary: This paper presents a "scale-teaching" framework aimed at addressing label noise in time series classification tasks. The proposed approach involves multiple encoders to extract multi-scale time series features. These features are then concatenated for the final loss calculation, enabling the selection of clean samples. The identified clean and noisy samples are further processed using a label propagation algorithm, incorporating graph learning techniques to correct the noisy labels. The entire dataset is subsequently trained using cross-entropy as the loss function. Experimental evaluations are conducted on the UCR 128 archive and UEA 30 archive datasets, both containing synthetic label noise. Strengths: 1. While the problem of learning with noisy labels (LNL) has been extensively studied in the domain of image classification, the exploration of label noise in time series classification remains relatively limited in current literature. This work makes significant contributions by conducting extensive experiments to investigate how the LNL experience gained in image classification can be effectively generalized to time series classification. 2. The utilization of multi-scale operations is particularly noteworthy in the context of time series classification with label noise, considering the inherent differences in modality between images and time series data. 3. The paper demonstrates a clear and coherent writing style, ensuring ease of comprehension for readers. Additionally, the release of the accompanying code promotes reproducibility, while the appendix provides comprehensive details on the conducted experiments. Weaknesses: 1. While I appreciate the contribution made by generalizing learning with noisy labels (LNL) from image classification to time series classification, the level of technical novelty in this work appears somewhat limited. 
Specifically, the utilization of a multi-scale framework has already been explored in time series forecasting tasks, and the application of a semi-supervised approach to enhance performance is not novel in the context of general LNL tasks [R1]. 2. The paper lacks sufficient ablation studies to thoroughly examine the individual components of the proposed method. For instance, in order to demonstrate the efficacy of the multi-scale framework in sample selection, it would be valuable for the authors to conduct controlled experiments using only one scale. While accuracy is reported in Table 3, it is not necessarily the most appropriate criterion for evaluating sample selection. Therefore, I suggest that the authors also report selection quality metrics such as precision, recall, and F1 score. Additionally, it would be worthwhile to explore alternative label correction techniques, such as pseudo-labeling or solely utilizing clean samples for training, to ascertain their competitiveness in comparison to the label propagation method employed in the paper. 3. It would be insightful to evaluate the performance of the "scale-teaching" framework on datasets that do not contain label noise. I noticed that such experiments were conducted in [R2], and I encourage the authors to perform similar settings to provide a comprehensive analysis in their work. [R1] DivideMix: Learning with Noisy Labels as Semi-supervised Learning [R2] Estimating the Electrical Power Output of Industrial Devices with End-to-End Time-Series Classification in the Presence of Label Noise Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See **weaknesses** Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This work extensively addresses its limitations, and based on the information provided, it appears that the paper does not pose potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1**: The utilization of a multi-scale framework has already been explored in time series forecasting tasks, and the application of a semi-supervised approach to enhance performance is not novel in the context of general LNL tasks [R1]. **A1**: Thanks for your comment. In Section 2, we have discussed existing work related to multi-scale time series modeling. In particular, the multi-scale properties of time series have been widely used for various time series downstream tasks, such as time series prediction, classification, and anomaly detection tasks. The above studies show that exploiting the multi-scale properties of time series can improve the performance of downstream tasks. Nevertheless, the above studies do not consider how to utilize the multi-scale properties of time series for label-noise learning. Unlike the above work, we employ multiple DNNs with the same structure to learn discriminative patterns of the time series at different scales separately, and design a cross-scale fusion strategy to obtain robust embeddings for handling noisy labels. DivideMix is a classical work for image label-noise learning that combines MixMatch semi-supervised learning method for noisy label classification. In contrast, our proposed Scale-teaching paradigm utilizes learned multi-scale embeddings to cope with time series noisy labels. In other words, Scale-teaching focuses on utilizing robust feature representations for label-noise learning rather than a semi-supervised learning paradigm for label-noise learning. In particular, Scale-teaching utilizes its learned multi-scale embeddings to select more reliable clean labels. Further, Scale-teaching uses the selected clean labels in conjunction with multi-scale graph learning to achieve noisy label correction. 
Hence, the focus of Scale-teaching exploits the multi-scale nature of time series for label-noise learning, and does not rely exclusively on the semi-supervised learning paradigm for noisy label classification. [R1] DivideMix: Learning with Noisy Labels as Semi-supervised Learning **W2**: (i) It would be valuable for the authors to conduct controlled experiments using only one scale. (ii) I suggest that the authors also report selection quality metrics such as precision, recall, and F1 score. (iii) Additionally, it would be worthwhile to explore alternative label correction techniques, such as pseudo-labeling or solely utilizing clean samples for training. **A2**: Thanks for your comment. (i) From Table 3 in Section 4.5, we have conducted controlled experiments using only one scale. For details, please refer to the *only single scale* method in Table 3. (ii) That's a good suggestion. Following [R2], we select averaged F1-score on the test set as a new metric for ablation analysis. Like Table 3 in Section 4.5, we give the corresponding test classification F1-score (%) as follows: |Dataset | HAR | | UniMiB-SHAR | | |:---:|:---:|:---:|:---:|:---:| | Method | Sym 50% | Asym 40% | Sym 50% | Asym 40% | | Scale-teaching | **90.05** | **89.14** | **77.56** | **65.89** | | w/o cross-scale fusion | 88.16 (-1.89) | 87.05 (-2.09) | 68.23 (-9.33) | 57.76 (-8.13) | | only single scale | 87.56 (-2.49) | 86.75 (-2.39) | 66.87 (-10.69) | 54.12 (-11.77) | | w/o graph learning | 87.79 (-2.26) | 87.41 (-1.73) | 74.62 (-2.94) | 63.15 (-2.74) | | w/o moment | 89.34 (-0.71) | 88.27 (-0.87) | 76.67 (-0.89) | 64.92 (-0.97) | | w/o dynamic threshold | 88.93 (-1.12) | 88.29 (-0.85) | 73.11 (-4.45) | 64.76 (-1.17) | (iii) In Section 3.4, we claim that $\mathcal{F}$ in Eq. (7) is the estimated pseudo-labels which inevitably contain some incorrect labels. To address the above issue, we utilize a dynamic threshold pseudo-labeling strategy to promote label correction performance. 
Further, we add an ablation study method (w/o dynamic threshold) in Table 3. The result shows that the dynamic threshold pseudo-labelling strategy can improve classification performance. For solely utilizing clean samples for training, please refer to the ablation study method named *w/o graph learning* in Table 3. [R2] Estimating the Electrical Power Output of Industrial Devices with End-to-End Time-Series Classification in the Presence of Label Noise **W3**: It would be insightful to evaluate the performance of the "scale-teaching" framework on datasets that do not contain label noise. **A3**: Thanks for your comment. We select the four individual large datasets without noisy labels for evaluation. The detailed test classification accuracy (Acc, %) and F1-score (F1, %) are shown as follows: | | Standard | | Mixup | | Co-teaching | | FINE | | SREA | | SELC | | CULCU | | Scale-teaching | | |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | Dataset | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | Acc | F1 | | HAR | 93.29 | 93.27 | **95.42** | **95.39** | 93.77 | 93.75 | 93.13 | 93.19 | 93.02 | 92.91 | 93.76 | 93.71 | 94.75 | 94.72 | 94.72 | 94.18 | | UniMiB-SHAR | 89.14 | 86.37 | _84.84_ | _80.17_ | 88.24 | 84.43 | 88.14 | 84.03 | _65.51_ | _66.54_ | 89.28 | 89.19 | 89.46 | 86.45 | **93.61** | **93.62** | | FD-A | 99.93 | 99.93 | 99.91 | 99.91 | **99.96** | **99.96** | _68.22_ | _64.05_ | _90.25_ | _90.14_ | 99.82 | 99.82 | 99.95 | 99.95 | **99.96** | **99.96** | | Sleep-EDF | 84.93 | 81.99 | 84.67 | 82.11 | 85.37 | 82.52 | 84.62 | 83.07 | _79.42_ | _77.67_ | 84.82 | 82.17 | **85.54** | 83.26 | 85.34 | **84.76** | As shown in the above table, Scale-teaching's classification performance is still better than most baselines. 
In addition, SREA employs an unsupervised time series reconstruction loss as an auxiliary task, which reduces the model's classification performance without noisy labels. --- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: I appreciate the comprehensive explanations and meticulously designed experiments provided. These experiments further bolster the validity of the proposed method, and I strongly encourage the authors to incorporate these new experiments into the next version. However, my initial concern remains unresolved. While I understand the explanations presented by the authors in the Rebuttal, I still find the method lacking in significant novelty. Beyond the aspect of multi-scale feature modeling, the other techniques appear to be direct extensions from the existing LNL/semi-supervised practices. I don't intend to disregard the potential utility of these techniques, but I'm eager to see a more profound analysis comparing the conventional noisy label setting in image classification with the noisy label setting in time series classification. In the current submission, these analyses seem insufficiently comprehensive or deeply explored. Overall, I consider this paper to be on the borderline. However, I have raised my score to 5 as my other concerns have been effectively addressed by the authors. --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply and for acknowledging our work. In our next version, we plan to include an exposition on the distinctions between time series noise labeling and conventional noisy label setting in image classification.
NeurIPS_2023_submissions_huggingface
2023
On the estimation of persistence intensity functions and linear representations of persistence diagrams
Reject
Summary: This paper considers the problem of estimating _persistence intensity (resp. density) functions_, which are topological summaries arising when considering multiples realizations $\mu_1,\dots,\mu_n$ of persistence diagrams---which are counting measures supported on an open-half-plane $\Omega$. Namely, (Chazal and Divol, 2019) proved that $\frac{1}{n} \sum_{i=1}^n \mu_i$ converges toward a limit object $E[\mu]$ called an _expected persistence diagram_ which, under mild assumptions, admits a density $p$ wrt the Lebesgue measure on $\Omega$, called a persistence _intensity_ function. Note that $p$ may not integrate to $1$ (the $\mu_i$s are *not* probability measures, only Radon measures). It is natural to wonder how fast one can estimate $E[\mu]$ given an i.i.d. sample $\mu_1,\dots,\mu_n$. Given that $E[\mu]$ has a smooth density $p$, while the $(\mu_i)$ are discrete, it is tempting to adopt a kernel based approach (i.e. considering the convolution of the $\mu_i$ by a kernel $K_h$, with $h>0$ being the bandwidth), yielding an estimator $\hat{p} = \hat{p}_{h,n}$ of $p$. This is the purpose of the current paper which shows that, in brief, assuming $p$ is $s$-Hölder, $$\| E[\hat{p}] - p \|_\infty \leq O(h^s), \qquad \text{(bias)},$$ and (with high probability) $$ \| \hat{p} - E[\hat{p}] \|_{\infty , h} \leq O(n^{-\frac{1}{2}}h^{-1}), \qquad \text{(variance)},$$ where $\| \cdot \|_{\infty,h}$ denote the sup-norm but only accounting for points in $\Omega$ that are at distance $> 2h$ from the diagonal $\partial \Omega = \{ (t,t), t \in \mathbb{R} \}$. As an alternative, they also study the persistence _density_ function, which is substantially similar to the above, but considering the quantity $\frac{1}{n} \sum_{i=1}^n \frac{\mu_i}{\mu_i(\Omega)}$ as a starting point, hence the limit object is a _probability_ distribution. 
This first normalisation step allows the authors to obtain similar results to those mentioned above, but stronger in the sense that they do not need to ignore points close to $\partial \Omega$. Strengths: ## Clarity The paper is fairly well written, though it may be hard to understand for a reader who is not familiar with statistical topological data analysis problems. Theorems are precisely stated, and, with few exceptions, mathematical quantities are well-defined. ## Originality and Significance The authors provide quantitative results for the convergence of a kernel-based density estimator for expected PDs, which is new to the best of my knowledge. The introduction of the persistence _density_ may also be worthy of interest, if further motivated (see below). ## Quality This is a competent paper in terms of the results provided (which seem correct as far as I can tell). However, the motivation behind the type of results themselves is arguably questionable. Weaknesses: There are several points which fail to convince me and prevent me from supporting this paper yet. ## 1. The use of the sup-norm. The paper provides results in the sup-norm on $\Omega$. It is important to stress that this norm does not account for the peculiar role played by the diagonal in the geometry of persistence diagrams, in contrast with the standard $\mathrm{OT}_p$ metric. The authors justify this by an inequality of the form $\mathrm{OT}_p^p(\cdot, \cdot) \leq C \| \cdot - \cdot \|_\infty$, meaning that being small in sup-norm is (strictly, as proved by the authors) more demanding than being close in the $\mathrm{OT}_p$ metric.
Sure, it means that the rate obtained for the sup-norm implies the same rate for the $\mathrm{OT}_p$ metric (up to the role played by the exponent), but it also means that the task is _harder_, and that this norm does not induce the same topology as the natural metrics on persistence diagrams. Recall (as suggested in $\ell$167-171) that accounting for the diagonal is not simply a trick to compare measures with different total masses, but also has an algebraic meaning (from which we get the mentioned stability results). The sup-norm induces a topology that fails to capture the fact that one can compare diagrams while "downweighting points close to the diagonal". To me, this is what prevents obtaining, for the persistence _intensity_ function, a "global" sup bound, yielding only bounds valid $2h$-away from the diagonal. The sup-norm cannot handle the noise properly. In addition, because performing estimation in sup-norm is _harder_ than estimation in the $\mathrm{OT}_p$ metric, one only gets a convergence rate of $\frac{s}{2(s+1)}$, which is natural for the sup-norm, but quite _slow_ for the $\mathrm{OT}_p$ metric. Indeed, (Divol and Lacombe, 2021) prove that the empirical expected persistence diagram (i.e. without any convolution by a kernel involved) converges toward the persistence _intensity_ function at the (faster) parametric rate $O(n^{-\frac{1}{2}})$. Of course, the latter result considers the weaker (but more natural) metric $\mathrm{OT}_p$, but as long as there is no very strong motivation to compare persistence intensity functions using a sup-norm, it is reasonable to wonder why we should struggle to obtain slower convergence rates. ## 2. Motivations behind the persistence _density_ function As (interestingly) observed by the authors, statistical estimation improves when considering the normalized persistence _density_ function.
However, here as well, I fail to be fully convinced by the proposed motivation, namely "the normalized persistence measure may be desirable when the number of points (...) is not of direct interest but their spatial distribution is" ($\ell$80-81). I do not agree with this claim because this normalization typically discards points away from the diagonal, asymptotically. For, consider an $N$-sample on a sphere + some tubular noise, and the Vietoris-Rips filtration on it. Then (with high probability), the corresponding (random) persistence diagram in $H_2$ (degree-$2$ homology) will have one point away from the diagonal, and a bunch of points close to the diagonal accounting for the noise---so does the corresponding persistence _intensity_ function. As $N$ increases, the points accounting for the noise get closer and closer to the diagonal, but also more abundant. As such, if one normalizes the persistence measure by its mass/number of points, the bump/point accounting for the "robust" topological information will asymptotically be erased. In particular, this normalized persistence measure is not continuous (for, say, the vague topology) with respect to the Hausdorff distance, a central property satisfied by the Vietoris-Rips filtration. (Note: this is what we can observe in Figure 3 vs Figure 2). Therefore, (i) it is not surprising that one obtains stronger (this time) results with this weaker representation and (ii) it is not clear to me why one would actually consider this representation as it loses its topological interpretation, as far as I can tell. ## 3. About the experiments The numerical illustrations have all been deferred to the supplementary material. To me, the (main body of the) paper should be self-contained, in the sense that one should not _have to_ look at the supplementary material to understand it at a high level.
Proofs, complementary results, and _complementary_ experimental reports can be deferred to the supplementary material, but having a **Numerical illustration** section without any numerical illustration, mostly saying "look at the supplementary material", is not serving the paper. Right now, the paper can be considered as experiment-less, and while numerical illustrations are not mandatory, this clearly does not support the paper. Note that I looked at the experiments nonetheless. While they are fairly interesting, they do not bring more motivation to support the paper (with, e.g., a ML experiment where using the persistence _density_ function is much better than using the more standard persistence intensity function). ## 4. Complementary minor comments - I think that there is a typo in the definition of $\Omega_\ell$, which is (I think) inconsistent with the description made below ($\ell$67) and its use in section 3. - More references could have been cited throughout the paper, e.g., when listing different linear representations ($\ell$129-131), it may be nice to credit their respective authors. Similarly, a more precise comparison with related work (mostly Divol's line of work with Polonik, Chazal and then Lacombe) would be helpful to understand what the paper's contribution is and how it differs from these works. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: My main interrogation is about the motivation of the proposed objects (sup-norm, normalisation wrt the mass), as detailed in points 1. and 2. above. The motivation can come from theoretical considerations, but also from numerical ones; e.g. an experiment showing that considering the sup-norm is more informative than the $\mathrm{OT}_p$ metric (I mean, it theoretically is---as discussed in section D1, but I do not see a _practical_ situation where it is important to observe that the sup-norm diverges while the $\mathrm{OT}_p$ converges).
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: I do not see any negative societal impact _specific_ to this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Motivation and rationale for the intensity function.** We thank the reviewer for the constructive and expertly crafted comments. We agree with all of them! We regard the issues raised by the reviewer as features and not drawbacks of our approach. Please see our general comments for a clarification about the goals of our work and their relationship with current TDA methods. **The use of the sup-norm.** 1. An upper bound in $\ell_{\infty}$ error does indeed deliver a strong statistical guarantee that immediately applies to arbitrary (bounded) linear functionals. It is indeed stronger than the one afforded by the optimal transport distance, which, as the reviewer mentions, down-weights the points near the diagonal, likely to represent topological noise. Since our aim is to instead capture topological noise and its distribution, we regard this as a feature, not a shortcoming. 2. If we still wish to down-weight the points near the diagonal and emphasize the topological signature, our framework can be easily adapted, by considering a weighted intensity function (with weights proportional to any power of the distance to the diagonal). We explore this extension in Section B.4 in the supplementary material, where we derive statistical guarantees for estimating the persistence surface. We will comment on this extension in the main body of the text. 3. We also would like to emphasize that our approach is computationally appealing. Both the intensity and density functions are straightforward to compute. Furthermore, comparing different intensity/density functions is also straightforward. In contrast, computing and evaluating OT distances (even with perfect knowledge of the distributions) is computationally challenging, in general. 4. The reviewer is correct that with a bias-variance trade-off, the rate of convergence in $\ell_{\infty}$ norm is slower, of order $O\left(n^{-\frac{s}{2(s+1)}}\right)$, if we let the bandwidth vanish with the sample size.
(This is an unavoidable, ubiquitous fact in non-parametric functional estimation). However, when the bandwidth $h$ is kept fixed, as is common in linear representations of the expected persistence measure like the persistence surface [DP19], the convergence rate is of order $O(n^{-\frac{1}{2}})$, as is shown in Section B.4 in the supplementary material. We will add more commentary to the paper to emphasize this fact. 5. Setting aside any issue of relevance and motivation, we would like to remark that our results are novel and original. For instance, the result in Thm 3.1 that the optimal transport distance is bounded above by the $\ell_{\infty}$ distance between the intensity functions is a new contribution, and so is the observation (in section D.1) that, when $q=\infty$, the topology induced by the $\ell_{\infty}$ distance is strictly stronger. **Motivations behind the persistence density function.** The reviewer is correct that asymptotically, the points away from the diagonal would vanish in the persistence density function. With the caveat that this type of asymptotic behavior (whereby the number of points used to compute the persistence diagram increases without bounds) is *outside* the framework of our results, we still have that the primary distributional features of the topological noise, represented by points close to the diagonal in the persistence diagrams, are well-preserved. Since this is a main focus of this work, we find that the persistence density function, despite the loss of information due to the normalization, remains a valid and potentially useful tool, which in addition enjoys better convergence guarantees. We agree with the reviewer that points in $\Omega$ that express topological features may not even have a positive persistence density, but, once again, we do not regard this necessarily as a limitation, but rather a feature.
See the example above about the difference in the distributions of the topological noise for persistence diagrams built from a uniform and non-uniform distribution over the sphere, as illustrated in Figure 1. We also agree that the normalized persistence measure is not continuous for the vague topology w.r.t. the Hausdorff distance (a fact that is not surprising, given our other results). We believe that a refined analysis of the topological properties of the normalized persistence measure would be an interesting (and probably subtle) problem to explore. We will provide better and clearer language to motivate the use, utility and limitations of the density function. **About the experiments.** Though, as pointed out by the reviewer, our paper is primarily theoretical, we believe that the experiments do offer some insights on the properties of the intensity and density functions. We are not sure how to design experiments that demonstrate the utility of using persistence density/intensity functions over more traditional tools because we do not see them as competing and mutually exclusive - they are just different. We are happy to include the simulations depicted in Figure 1, which show two sample persistence diagrams from a uniform and non-uniform distribution over the sphere, along with the associated persistence intensity and density functions. These plots illustrate two settings in which the topological signals are nearly identical but the distribution of the topological noise is markedly different. **Complementary minor comments** 1. The reviewer is correct that there is a typo with regard to the definition of $\Omega_{\ell}$. The correct definition should be \begin{equation} \Omega_{\ell} = \\{\mathbf{x} \in \Omega: \min_{\mathbf{\omega} \in \partial \Omega} \left\Vert\mathbf{x} - \mathbf{\omega}\right\Vert_{2} \geq \ell \\}. \end{equation} 2. We will include more references and details about how our contribution differs from existing ones.
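As an aside on point 3 above ("straightforward to compute"): a kernel-based estimator of the kind discussed here fits in a few lines of Python. This is a generic Gaussian-kernel sketch under naming and bandwidth choices of our own, not the authors' implementation:

```python
import math

def intensity_estimate(diagrams, x, h=0.5):
    """Kernel estimate of the expected persistence intensity at a point x.

    diagrams: an i.i.d. sample of persistence diagrams, each a list of
    (birth, death) pairs.  Gaussian kernel with bandwidth h (both are
    illustrative choices).  Averaging over diagrams without normalizing
    by the point counts targets the intensity function.
    """
    n = len(diagrams)
    total = 0.0
    for dgm in diagrams:
        for (b, d) in dgm:
            sq = (x[0] - b) ** 2 + (x[1] - d) ** 2
            total += math.exp(-sq / (2 * h * h)) / (2 * math.pi * h * h)
    return total / n
```

Integrating this estimate over the plane recovers the average number of points per diagram; dividing by that mass instead yields an estimate of the persistence density function.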
--- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for taking the time to address my comments. To be honest, I am not truly convinced by the _it's a feature not a bug_ approach, but I may consider engaging in discussion with other reviewers to see how they feel about this. --- Reply to Comment 1.1.1: Comment: Thank you for keeping an open mind. We do not intend to oversell our results: what we mean by "features" is just the properties of the intensity function and of our method (the good, the bad and the ugly). Our main point is that this perspective (which is not just ours, as it has been considered by others) is different from the prevailing TDA paradigm and worth investigating, not only mathematically (as it was done by [CD19]) but also statistically.
Summary: The paper proves several theoretical inequalities involving the optimal transport distance, intensity, and density functions on a plane triangle with a non-standard boundary motivated by persistent homology. Strengths: The paper rigorously proves in appendix C six theorems and three corollaries from section 3. All definitions, statements, and proofs are written in great detail. Also, the paper is well-written overall. Weaknesses: Starting already from section 2 about the background, it seems that the term "persistence" is not really needed because there is no connection with real data. Borel sets, measures, densities, and other concepts of classical probability theory can be considered on a plane triangle without the "persistent" adjective. Hence it is strange to read lines 97-99 saying that "measure and probability are not yet standard concepts in the practice and theory of TDA. As a result, they have not been thoroughly investigated" because measure and probability have been standard concepts in probability theory for nearly 100 years. The conclusions reveal the main theoretical weaknesses in lines 328 and 332: "Our main focus is on the estimation of the persistence intensity function [CD19, CWRW15]." More explicitly, lines 106-109 accept that "[CD19] provided explicit expressions for p and p˜ ... We will refer to the functions p and p˜ as the persistence intensity and the persistence density functions, respectively. We remark that the notion of a persistence intensity function was originally put forward by [CWRW15]." Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Taking into account lines 106-109, what is the theoretical advance in the paper over the past work [CD19, CWRW15]? Are Assumptions 3.2, 3.3, and 3.4 essential for the proven results? Do you have counter-examples to the theorems and corollaries when one of these assumptions fails?
Even if we accept Assumptions 3.2, 3.3, and 3.4, can the word "persistence" be removed from sections 2-3 so that all results are proved for measures on any plane triangle? The words "Betti numbers" can be easily defined for any triangle bounded by the diagonal x=y. Then will the paper become much more suitable for a more theoretical venue in statistics? Do Assumptions 3.2, 3.3, and 3.4 hold for persistence diagrams obtained from the experiments in section 4? The main practical weakness is the lack of a problem statement for the data mentioned in section 4. Is this data real or simulated? Some pictures would be helpful. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Though the paper doesn't include the required keyword "limitation", the limitations appear in Assumptions 3.2, 3.3, 3.4. For instance, Assumption 3.4 essentially requires that there is not too much "little noise". In a simple case of the sublevel persistence of a scalar function, this function can be perturbed only by introducing a "bounded amount" of pairs of adjacent local maxima and minima. More exactly, the persistence diagram allows only a bounded sum of "noisy artefacts" near the diagonal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comparison with previous literature.** The notion of persistence intensity function is neither due to us nor new, and has been considered and used before: it has been suggested by [CWRW15], used in practical applications, e.g. by [WNv+ 21], and recently formalized and studied in great detail by [CD19] in the TDA literature. Indeed our results complement the contributions of [CD19], who posed the problem of statistical estimation of persistence intensity functions but did not tackle it (so our results leverage theirs, but are different). In this regard, our results fit well within a recent line of work in the TDA/statistical literature and offer new results. We will provide more detailed commentary and comparisons with existing papers. **Probability theory and the use of the word persistence.** Our analyses rely critically on the fact that the points in the persistence diagram belong to a plane triangle and that the total persistence is bounded. Furthermore, our results are specifically tailored to TDA settings, so we prefer to keep using the word ``persistence" to make this explicit, even if in principle our analysis could be made more general. We will rephrase the sentence in lines 97-99. We did not mean to imply that measure theory and probability are not standard concepts in TDA! **Assumptions.** While we agree with the reviewer that it would be interesting to determine how critical Assumptions 3.2, 3.3 and 3.4 are, we will abstain from pursuing this line of research for the following reasons: (i) those assumptions are standard and widely used in the TDA and non-parametric statistical literature; and (ii) weakening these assumptions may be feasible but it will produce significant technical challenges, and will likely lead to new and weaker assumptions that are also much more technical. **Experiments.** The original ORBIT5K dataset consists of simulations. We will include additional figures in the appendix to illustrate the data.
--- Rebuttal Comment 1.1: Title: remaining questions Comment: Thank you for the reply. >our results complement the contributions of [CD19], who posed the problem of statistical estimation of persistence intensity functions Could the authors please specify the exact place (problem number or at least a page) in [CD19], where this problem was posed? The following two questions seem unanswered. Taking into account lines 106-109, what is the theoretical advance in the paper over the past work [CD19, CWRW15]? Do Assumptions 3.2, 3.3, and 3.4 hold for persistence diagrams obtained from the experiments in section 4? Could the author comment on their results in the context of the work by Bobrowski et al (https://www.nature.com/articles/s41598-023-37842-2, available at https://arxiv.org/abs/2207.03926 since July 2022) claiming "a surprising discovery: normalized properly, persistence diagrams arising from random point-clouds obey a universal probability law" (quoted from the abstract)? --- Reply to Comment 1.1.1: Comment: Apologies for the terseness in our response. In [CD19] the authors formalize in rigorous mathematical ways the notion of the persistence intensity function, showing that, under mild conditions, this function is not only well-defined but in fact corresponds to the Radon-Nikodym derivative of the expected persistence measure w.r.t. the Lebesgue measure. Their core contributions are mathematical, but in (the brief) Section 8, they suggest using a kernel-density-based method to estimate the persistence intensity function and, using results from non-parametric statistics, prove that a data-driven method for selecting the bandwidth is asymptotically optimal (with respect to the mean square error loss). The authors focus on the mean integrated squared error for the problem of bandwidth selection, and suggest consistency for the mean squared error but do not investigate the detailed conditions and proofs.
In contrast, in our paper, we conduct a detailed finite sample statistical analysis of the same estimator, and provide rates of consistency under the sup-norm metric, as opposed to the mean squared error, so our statistical contributions are stronger. Our results rely on the mathematical formalism and tools of [CD19] but tackle material that was only alluded to and not analyzed in [CD19]. In this sense, our results complement theirs. The manuscript [CWRW15] is arguably the first contribution to propose estimating the persistence intensity functions using a sample of many persistence diagrams. The authors provide a bound on the MISE (mean integrated squared error), while in our paper we focus on the stronger and more challenging sup-norm guarantees. Furthermore, our analysis is non-asymptotic and arguably more sophisticated and leverages the mathematical results from [CD19], which was not available to [CWRW15]. About assumptions 3.2, 3.3 and 3.4: yes, they are satisfied in our experiments. Thank you for pointing this out. We will clarify it in our revision. Thank you so much for pointing out the reference by Bobrowski and Skraba about the fascinating conjecture of the universality of the noise distribution. We will include that reference in our revision. Their conjecture is that an appropriately standardized aggregate statistic of the points in the persistence diagram will have a universal limiting distribution. 
There are two key differences from our approach: (i) the asymptotics are different: Bobrowski and Skraba are concerned with the limiting behavior arising from one persistence diagram computed using an increasing number of sample points, while we instead consider an increasing number of persistence diagrams, each evaluated with a fixed number of sample points; and (ii) the conjectured universality is about the limiting behavior of an aggregate statistic, while we are focused on the entire distribution of the topological noise (as captured by the persistence measure).
Summary: This work develops a set of methods and theories for statistical inference for TDA based on samples of persistence diagrams: a. The work focuses on the estimation of the persistence intensity function. The work also proposes the novel persistence density function, which is the normalized version of the persistence intensity function. b. The work presents a class of kernel-based estimators based on an i.i.d. sample of persistence diagrams and derives estimation rates in the supremum norm, which is stronger than the optimal transport distance norm. c. The work obtains uniform consistency rates for estimating linear representations of persistence diagrams, including Betti numbers, persistence surfaces, persistence silhouettes and weighted Gaussian kernels. d. The work presents several theorems: Theorem 3.1 compares the $L^\infty$ norm and the optimal transport distance in terms of controlling the estimation error; Theorems 3.5 and 3.6 show the kernel estimation error bounds for the persistence density function and the persistence intensity function; Theorems 3.8 and 3.9 show the estimation error bounds for the linear representations. These theoretical results are fundamental and important; they lead to novel directions for statistical inference for TDA based on random persistence diagrams. Strengths: This work is very solid, and gives rigorous mathematical proofs. The formulations of key concepts and main theorems are clear and rigorous, and the mathematical deductions for lemmas, theorems, and corollaries are thorough and detailed. The theoretical results are convincing and impressive. Weaknesses: The work is highly theoretical, and the heavy mathematical deductions are abstract. It will be more helpful for readers to digest if the authors further explain the motivations, the main proof approaches, the interpretations of the theorems, and the potential direct applications of the results. More specifically, 1.
It will be helpful for readers to better understand the article to give a table of symbols, listing the major symbols and their meanings; 2. It will be helpful to give some figures to illustrate the concepts, such as the persistence diagram; 3. Some math symbols and operators can be further explained, such as: a. The two symbols in line 145 are hard to differentiate, especially on a laptop screen; maybe the authors can emphasize the subtle differences, or use different symbols; b. The formula in line 165, $\|x-y\|_2^q$, needs more explanation c. The formula in line 247 in the supplementary, the operator $\mathrm{Proj}_{\partial\Omega}$, needs more explanation Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The interpretation of the concept of a random persistence diagram. For example, if we consider the MNIST data sets: Do we treat each image as a point cloud and build a persistence diagram? Do we only consider the images of one digit or different digits together? Where does the randomness come from? Different writing styles of different people? Random noises in the imaging process? 2. Theorem 3.1: the result is very general and can be applied in much broader fields. Does the inequality hold on general compact domains? How tight is the bound? 3. In the proof of Theorem 3.1, please explain the current admissible transport plan. Is there another admissible transport scheme which can lead to a tighter estimate? 4. Please give some direct applications of the estimated persistence intensity/density functions: can we use them for generating persistence diagrams? Recognize, authenticate, classify persistence diagrams in TDA? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work is theoretical; it mainly focuses on theoretical deductions. The limitations are not adequately addressed. It will be helpful if the limitations in practical applications are further discussed. 1. In reality, how difficult is it to satisfy all the assumptions listed in the paper? 2. If the point cloud includes several homology generators with similar birth and death times, may the current approach mix them and cause confusion? 3. In Theorem 3.6, from samples close to the diagonal, can we get a more precise estimate? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. 1. **Experiments.** In the MNIST data set, we treat each image as a point cloud and construct a persistence diagram. Images representing the same number can be regarded as generated from the same distribution, while those representing different numbers are generated from different distributions. The noise comes from various factors such as, as the reviewer mentions, writing styles or the imaging process. 2. We appreciate that the reviewer recognizes the value of Theorem 3.1. In our version, the total mass of the measures $\mu$ and $\nu$ may be different, so the optimal transport distance is defined such that the mismatching mass may be transported to the diagonal $\partial \Omega$. In general, for any compact set $\mathcal{C}$ and any measures $\mu,\nu$ such that $\mu(\mathcal{C}) = \nu(\mathcal{C})$, it can be guaranteed that $$ \mathsf{OT}\_{p}^{p}(\mu,\nu) \leq [\mathsf{diam}(\mathcal{C})]^p \mathsf{Vol}(\mathcal{C}) \left\Vert p_{\mu}-p_{\nu}\right\Vert_{\infty}. $$ Here, $\mathsf{diam}(\mathcal{C})$ and $\mathsf{Vol}(\mathcal{C})$ represent the diameter and volume of the compact set $\mathcal{C}$ respectively. Besides, the upper bound we show in Theorem 3.1 is also tight: it can be verified that equality is attained when we pick $\mu$ as uniform on $\Omega$ and $\nu$ as the null measure $\nu(\Omega) = 0$ -- as we will show in the next paragraph. 3. Again, we thank the reviewer for the recognition of the theoretical value of Theorem 3.1 and going through the details of its proof.
As we explained in lines 268-270 in the supplementary material: "*Intuitively,* $\hat{\pi}$ *represents such a transport: at each point* $\mathbf{x} \in \Omega$, *if* $p_{\mu}(\mathbf{x}) > p_{\nu}(\mathbf{x})$, *then we transport the mass of* $p_{\nu}$ *from* $\mathbf{x}$ *to* $\mathbf{x}$, *and the remaining mass from* $\mathbf{x}$ *to its projection onto* $\partial \Omega$; *if* $p_{\nu}(\mathbf{x}) > p_{\mu}(\mathbf{x})$, *then the opposite is done.*" We believe it is not possible to find another admissible transport that leads to a tighter estimate. Consider the following example: let $p_{\mu}(\mathbf{\omega}) = 1$ and $p_{\nu}(\mathbf{\omega}) = 0$ for all $\mathbf{\omega} \in \Omega$. Essentially, $\mu$ is uniform on $\Omega$ and $\nu$ has zero mass. In this case, all the mass in $\mu$ would need to be transported to the diagonal $\partial \Omega$, and the optimal transport plan is to transport the mass on every point $\mathbf{\omega} \in \Omega$ to its projection onto $\partial \Omega$. It is therefore easy to verify that $$ \left\Vert p_{\mu} - p_{\nu}\right\Vert_{\infty} =1, $$ and $$ \mathsf{OT}\_{p}^{p}(\mu,\nu) = \int\_{\Omega} \left\Vert\mathbf{\omega} - \partial \Omega\right\Vert\_{2}^{q} \mathrm{d} \mathbf{\omega} = \frac{2}{(q+1)(q+2)} \left(\frac{L}{\sqrt{2}}\right)^{q+2}. $$ 4. The framework we proposed can be applied to classify two populations of persistence diagrams by comparing their estimated intensity functions. Representing distributions of persistence diagrams with intensity functions offers a very concrete way to compare them and quantify the magnitude of their differences. This is indeed a main motivation for our work. We will make this point explicit in our revision.
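The closed-form value of the integral above is easy to sanity-check numerically. The throwaway script below (ours, assuming $\Omega = \{(x, y): 0 \leq x \leq y \leq L\}$, so that the distance from $(x, y)$ to the diagonal $\partial\Omega$ is $(y - x)/\sqrt{2}$) compares a midpoint-rule approximation against the stated formula:

```python
import math

def ot_upper_integral(q, L, n=400):
    """Midpoint-rule approximation of the integral of ||w - dOmega||_2^q
    over the triangle Omega = {(x, y): 0 <= x <= y <= L}, where the
    distance from (x, y) to the diagonal is (y - x) / sqrt(2)."""
    h = L / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(i + 1, n):  # cells strictly above the diagonal
            y = (j + 0.5) * h
            total += ((y - x) / math.sqrt(2)) ** q * h * h
    return total

def closed_form(q, L):
    """The closed form from the rebuttal: 2/((q+1)(q+2)) * (L/sqrt(2))^(q+2)."""
    return 2.0 / ((q + 1) * (q + 2)) * (L / math.sqrt(2)) ** (q + 2)
```

For instance, with $q = 2$ and $L = 1$ the closed form gives $1/24$, and the quadrature agrees to several decimal places.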
Summary: The paper tackles the problem of estimating the persistence intensity function, describing the distribution of random persistence diagrams, and proposes a variant called the persistence probability function that integrates to one. The paper starts with a theoretical analysis of the estimation error bound of the intensity function using the OT measure and the L-infinity norm, showing that the latter allows the definition of stricter bounds. The paper also proposes a method to estimate the persistence intensity function and the persistence probability function under the assumption of i.i.d. samples using a kernel density estimation approach. Strengths: Persistence diagrams are an important tool for characterizing topological structures (e.g. surfaces and graphs). Being able to estimate with high accuracy the distribution of such structures could indeed be beneficial to their analysis, with applications also to the learning domain. The paper, at least to a not-so-expert reader like me, seems very rigorous in the theoretical analysis and provides in the sup. mat. all the proofs of the introduced theorems. Weaknesses: The main weakness (if we want to call it so) of the paper is that it is not easy to read by nonexperts of the specific topic. It is quite dense and mostly mathematical and does not provide many intuitive explanations of why some properties could be important from a practical perspective. In general, I would have appreciated a more gentle introduction to the problem and a wider introduction/literature review on the practical applications/advantages of persistence intensity functions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I think that it would help to add some details in the introduction about the importance and applicability of the proposed tool, which is still not clear to me. In the conclusions, you state that statistical inference is not yet possible with the proposed method.
Does this mean that it cannot be used in practice? Row 101: of problems Row 139: of the expected Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I don’t foresee any particular negative societal impact. A discussion about the practical applicability of the method would be interesting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Our methodology can certainly be used in practice. However, our statistical guarantees only show the consistencies of our estimators. In order to carry out more sophisticated inferential tasks, e.g. hypothesis testing and confidence sets, a more refined analysis is in order. For example, in order to compute an asymptotically valid confidence band for the persistence intensity/density function, it appears necessary to study the validity of the bootstrap or other resampling methods. This is a non-trivial task that we will leave for future work. We will improve the language in the introduction to provide better motivations for our contributions.
Rebuttal 1: Rebuttal: We would like to clarify a few important points about our paper that perhaps we did not express as clearly as we intended to. We will include additional text in the introduction and throughout the manuscript to make sure there will not be confusion. Our work is not intended to suggest an alternative framework to the current and prevailing TDA practices. Rather, we explore a statistically grounded approach whose main objective is to describe the *distribution* of a random persistence diagram and not any particular realization or target persistence diagram. As we are interested in capturing the overall randomness of persistence diagrams, we seek to represent both the topological signal and the topological noise, the latter being our primary target. Concretely, and using the example suggested by the reviewer, suppose that we are interested in the distributions of persistence diagrams originating from a uniform distribution and from a non-uniform distribution on the unit sphere whose density is, say, inversely proportional to the arc length distance from an arbitrary reference point on the sphere. The topological signature is the same in both cases (that of the unit sphere), but the topological noise is different, having different distributions. See **Figure 1 in the attached pdf file** for an illustration. In this paper we study a methodology, rooted in the literature on non-parametric density estimation, that is in principle able to identify and quantify such a difference using the highly interpretable quantity of the persistence intensity function. We believe this contribution is neither trivial nor lacking rationale and in fact may hold the potential to yield new statistical methods for TDA. Secondly, the notion of persistence intensity function is neither due to us nor new, and has been considered and used before: it has been suggested by [CWRW15], used in practical applications, e.g.
by [WNv+ 21], and recently formalized and studied in great detail by [CD19] in the TDA literature. Indeed our results complement the contributions of [CD19], who posed the problem of statistical estimation of persistence intensity functions. More generally, if one is inclined to treat a persistence diagram as a (complex!) point process, the intensity function is a natural object to investigate. Thirdly, in addition to focusing on distributional properties of persistence diagrams (as a means to express topological noise), our analysis differs from standard TDA approaches in that we assume different asymptotic behavior, by requiring the availability of a growing number of i.i.d. persistence diagrams and *not* of a growing number of i.i.d. data points to construct each observed persistence diagram. Finally, the methodology we consider is computationally inexpensive to apply, a feature that does not always apply to TDA methods. In conclusion, we believe that persistence intensity functions provide an alternative set of tools for statistical inference for TDA that is rather distinct and does not clash with existing TDA paradigms and, as such, is worth being investigated. In this paper we take the first step towards studying the validity and limitation of such an approach. We will expand the introduction and the main body of the paper to clarify our perspective and provide better context. Pdf: /pdf/264ab859f0ca24bd19c70eaa8a7a082262636f03.pdf
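The estimation target discussed above, the persistence intensity function of a random persistence diagram, admits a natural kernel-type estimator in the spirit of non-parametric density estimation: average, over the m observed diagrams, a smoothing kernel placed at every (birth, death) point. The sketch below is illustrative only; the function name, the Gaussian kernel, and the bandwidth are our assumptions, not the paper's actual estimator.

```python
import numpy as np

def intensity_estimate(diagrams, grid_x, grid_y, h=0.1):
    """Kernel estimate of a persistence intensity function from m i.i.d.
    diagrams, each a (k_i, 2) array of (birth, death) points:
        lambda_hat(x) = (1/m) * sum_i sum_{p in D_i} K_h(x - p),
    with K_h an isotropic Gaussian kernel of bandwidth h, evaluated on a grid."""
    m = len(diagrams)
    xx, yy = np.meshgrid(grid_x, grid_y, indexing="ij")
    est = np.zeros_like(xx)
    for D in diagrams:
        for b, d in D:
            est += np.exp(-((xx - b) ** 2 + (yy - d) ** 2) / (2.0 * h ** 2))
    # Normalize the Gaussian kernel and average over the m diagrams.
    return est / (m * 2.0 * np.pi * h ** 2)
```

Unlike a density, an intensity integrates to the expected number of points per diagram, so a quick sanity check is that the grid sum times the cell area recovers the average diagram size.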
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Accelerating Motion Planning via Optimal Transport
Accept (poster)
Summary: With the aim of improving efficiency in motion planning, this paper proposes an efficient, gradient-free optimization method, MPOT. This is enabled by introducing the Sinkhorn step, a zero-order parallelizable update rule that is guaranteed to converge under smoothness and boundedness assumptions. The authors perform empirical studies of the effectiveness of their method in 3 benchmarks. Strengths: - Motion planning is an important task that must be solved quickly while outputting smooth plans to enable the deployment of robotic systems in the real world. As such, this problem is a relevant one. - Both the Sinkhorn step and MPOT are described in detail and, to the best of my knowledge, are novel contributions to the field. - The paper is well written, well organized and clear. Weaknesses: - The benchmarking results presented in Section 5.2 correspond to two settings in cluttered environments where RRT*/I-RRT* do very well despite the running time cost incurred. Including benchmarks where RRT* fails would be interesting to further explore other potential advantages of this method besides run time complexity. - In Section 5.3, the authors only report results on MPOT and GPMP2, despite the fact that methods like SGPMP solve similar-sized problems in their experiments. A discussion on the limitations of application, or an extension of the benchmark, would make the case stronger within this setting. **Minor comments**: - In line 386, it should read “At last”. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - On what hardware is the time comparison from Table 1 being drawn? RRT*/I-RRT* will benefit from stronger CPUs, whereas the other methods have GPU dependencies, so a hardware comparison is important to understand the run time benchmark. - How does MPOT do in other benchmarks where RRT* fails to reach a solution? 
- Why is MPOT only compared to GPMP2 in the mobile manipulation experiment, instead of considering similar baselines to the previous cases? - Are there other known applications for the Sinkhorn step? The authors comment “sampling methods or variational inference”, but a slightly more detailed discussion of this in the paper could improve the contributions greatly. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed limitations of their work to a good extent in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments and constructive feedback. **Regarding the comment about RRT\*/I-RRT\*:** Initially, our intention was to use RRT*/I-RRT* as an indicator of feasibility of the tested environments since they enjoy probabilistic completeness, i.e., given an infinite time budget, if a solution exists these search-based methods will find the plan. Optimization-based motion planners, like MPOT, GPMP2, CHOMP, and STOMP, are only local optimizers. Therefore, if a solution cannot be found by RRT*/I-RRT*, then optimization-based approaches cannot recover a solution either. Please note that key issues with RRT*/I-RRT* are the computational complexity and the lack of smoothness (e.g., see the answer to question 2, about the performance of RRT*/I-RRT* for the TIAGo++ environment). Utilizing a motion planning strategy like MPOT that plans at very high frequencies, we can potentially optimize in real-time and be more efficient in re-planning in dynamic environments, while the induced smoothness of MPOT makes trajectory tracking from the low-level controller much easier. We plan to investigate reactive motion planning with MPOT in future work. 1. **On what hardware is the time comparison from Table 1 being drawn?** All experiments are executed on a single RTX3080Ti GPU and a single AMD Ryzen 5900X CPU. Since all codebases are implemented in PyTorch (e.g., forward kinematics, planning objectives, collision checking, environments, etc.), for conformity we also implemented RRT*/I-RRT* in PyTorch. However, we use the CPU when running the RRT*/I-RRT* experiments and the GPU for MPOT and the other baselines. 2. **How does MPOT do in other benchmarks where RRT\* fails to reach a solution?** In our experiments, RRT*/I-RRT* only fails to find solutions in the TIAGo++ environment due to hitting the time-limit budget. 
We tried a very high time limit (1000 seconds); however, the dimensionality of the TIAGo++ environment is too high for RRT* to explore and reach the narrow grasp point. MPOT manages to find a reasonable solution in a few seconds in this case, being a batch gradient-free optimizer leveraging the efficient Sinkhorn algorithm. 3. **Why is MPOT only compared to GPMP2 in the mobile manipulation experiment, instead of considering similar baselines to the previous cases?** We conducted additional comparative experiments on the TIAGo++ environment to better support our claims, and according to the reviewer's request, the results are available in the rebuttal attachment. Apparently, CHOMP performs worse than GPMP2 and takes more iterations in a cluttered environment. However, according to the new table result, CHOMP beats GPMP2 in run time complexity in the TIAGo++ environment due to its simpler update rule. Following your suggestion, we tried SGPMP and observed that we could not tune SGPMP to surpass a success rate of 5/20; hence we opted for the minimum iterations to achieve the mentioned success rate. The problem lies in how SGPMP explores with constant GP variance in a high-dimensional setting; thus, a possible future extension of SGPMP with adaptive explorative variance could mitigate the problem. Similarly, STOMP's exploration mechanism is even more restrictive, and we could not tune it to work in this environment. Overall, with the efficient Sinkhorn Step facilitating individual waypoint exploration, MPOT outperforms the baselines in terms of run time complexity by a considerable margin in this case while maintaining reasonable smoothness and task performance. 4. **Are there other known applications for the Sinkhorn step?** To the best of our knowledge, we are the first to propose the Sinkhorn Step - an optimization operator that utilizes the Sinkhorn algorithm and connects zero-order optimization of non-convex objectives with Optimal Transport theory. 
We briefly mention our vision to apply Sinkhorn Step in other Machine Learning fields. For example, Sinkhorn Step can approximate gradients in SVGD [1] with a suitable choice of the polytope family. Another example in variational inference is to apply Sinkhorn Step to update a Gaussian Mixture Model as the proposal distribution to match the unnormalized target distribution. [1] Liu, Qiang, and Dilin Wang. "Stein variational gradient descent: A general purpose bayesian inference algorithm." Advances in neural information processing systems 29 (2016). --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the detailed rebuttal, the authors have answered all of my questions. Adding the extra mobile manipulation results will definitely make the experimental section stronger.
Summary: This paper proposes MPOT, a gradient-free method that optimizes a batch of smooth trajectories with nonlinear costs even for high-dimensional tasks. In particular, a zero-order and highly-parallelizable update rule called the Sinkhorn Step is proposed to facilitate the optimization process. MPOT outperforms the baseline methods with respect to planning speed and success rate across various tasks. Strengths: 1. This paper addresses a fundamental problem, trajectory smoothing in motion planning, and proposes a solid method. 2. The paper is self-contained and provides a very detailed introduction to the method. It is well organized and easy to read. Weaknesses: 1. To enable the batch optimization, it seems that each sequence of optimized points in the same batch is required to share the same length, thereby forming a matrix $X \in R^{n\times d}$. Such an operation somewhat hinders the flexibility of the algorithm to process trajectories with quite different numbers of waypoints in a batch-wise manner. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Please refer to the weakness. How do you process sequences with different waypoint numbers? 2. How do you evaluate the planning time of other baselines, such as STOMP and GPMP2? Are other baseline methods also executed in batch? 3. If other baselines are not performed in batch, how do you justify the motivation for performing trajectory smoothing in batch? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, limitations are addressed by the authors. I see no negative societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation of our work. 1. **How do you process the sequences with different waypoint numbers?** Currently, for vectorizing the update of all waypoints across the batch of trajectories, we flatten the batch and horizon dimensions, apply the Sinkhorn Step, and then reshape the tensor to the original shape. Notice that what really glues the waypoints in the same trajectory together after optimization is the log of the Gaussian Process used as the cost model, which promotes smoothness and model consistency. Given this pretext, in the case of a batch of trajectories with different horizons, we set a maximum horizon $T_{\textrm{max}}$ and pad with zeros those trajectories having $T < T_{\textrm{max}}$. Then, we also set to zero all rows corresponding to these padded points in the cost matrix $\mathbf{C} \in \mathbb{R}^{T_{\textrm{max}} \times n}$. In this way, the padded points are ignored in the barycentric projection. Intuitively, we just need to manipulate cost entries to dictate the behavior of waypoints. 2. **How do you evaluate the planning time of other baselines, such as STOMP and GPMP2? Are other baseline methods also executed in batch?** We further describe our experiment settings in Appendix I. We tune all baselines for each experiment and then measure the planning time T$[s]$ of the baselines until convergence or until the maximum iteration is reached. T$[s]$ is averaged over the number of environment seeds and the number of tasks. - Striving for a fair comparison, in all cases besides RRT*/I-RRT*, we reimplemented all baselines in PyTorch and fine-tuned them with the vectorization option. To the best of our knowledge, the baselines are not explicitly designed for plan vectorization. Thus, their associated public codebases are implemented for single-plan querying. 
- Regarding the vectorization of RRT and its variants, we found some works [1, 2] that focus on parallelizing specific algorithmic components of RRT*, such as graph operations or collision checking in a single planning instance. They resort to special hardware design (e.g., FPGA) or state-space grid discretization, limiting their application to general settings. Thus, vectorizing the whole RRT* algorithm pipeline is non-trivial and still an open question. We opted for serial computation of RRT* to affirm the environments' solvability due to its completeness property, while reflecting the performance gap between GPU-vectorized optimization-based algorithms and serial classical sampling-based algorithms. 3. **If other baselines are not performed in batch, how do you justify the motivation for performing trajectory smoothing in batch?** As explained above, all baseline comparisons (except RRT*/I-RRT*) are performed in batch. On a further note, there are three interplaying factors that contribute to the solution diversity and, hence, to discovering better modes: - the step radius of the Sinkhorn Step - the moderately-high initialization variances - the number of plans in a batch. When using MPOT as an oracle for collecting datasets, these factors contribute to a solution diversity that covers various modes, capturing the homotopy classes of the tasks and their associated contexts. For direct execution, the abundance of solutions vastly increases the probability of finding good local minima, from which we can select the best solution according to some criteria, e.g., collision avoidance, smoothness, model consistency, etc. We will add such a discussion in the final version of the paper to better motivate the need for batch-wise trajectory optimization. [1] Bialkowski, Joshua, Sertac Karaman, and Emilio Frazzoli. "Massively parallelizing the RRT and the RRT*." 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2011. [2] Xiao, Size, Neil Bergmann, and Adam Postula. 
"Parallel RRT* architecture design for motion planning." 2017 27th International Conference on Field Programmable Logic and Applications (FPL). IEEE, 2017. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. My comments and questions have been addressed. I would like to keep the rating as 7.
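The zero-padding scheme described in answer 1 of the rebuttal above can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: the full Sinkhorn iterations are replaced by a single entropic row normalization, NumPy stands in for PyTorch, and all names and shapes are our assumptions. It only demonstrates the stated mechanism: a zeroed cost row gives a padded waypoint uniform weights over a symmetric vertex set, whose barycenter is zero, so the padded point does not move.

```python
import numpy as np

def sinkhorn_step_padded(X, cost_fn, vertices, alpha=0.1, lam=0.05, pad_mask=None):
    # X: (N, d) flattened batch of waypoints; vertices: (n, d) polytope directions.
    # cost_fn maps a (..., d) array of points to (...) costs.
    probes = X[:, None, :] + alpha * vertices[None, :, :]  # (N, n, d) probe points
    C = cost_fn(probes)                                    # (N, n) probed costs
    if pad_mask is not None:
        # Zero the cost rows of padded waypoints, as described in the rebuttal.
        C = np.where(pad_mask[:, None], 0.0, C)
    logits = -C / lam
    logits -= logits.max(axis=1, keepdims=True)            # numerically stable softmax
    W = np.exp(logits)
    W /= W.sum(axis=1, keepdims=True)                      # row-normalized weights
    return X + alpha * (W @ vertices)                      # barycentric update
```

With an orthoplex vertex set (the $\pm e_k$ directions), the uniform-weight barycenter is exactly the origin, which is what makes the padded rows inert.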
Summary: The paper focuses on the optimization of the motion planning problem, by introducing a gradient-free method that is parallelizable and produces smooth trajectories. The method first probes the costs at several vertices, then decides the optimization direction by aligning the weight matrix with the cost matrix. Furthermore, to enforce the weight matrix as a joint distribution, the method introduces the Sinkhorn Step, which is based on an acceleration technique for optimal transport optimization. Results show that the method has significantly lower planning time and a higher success rate. Strengths: 1. Though including some technical mathematical concepts, the paper is well written and easy to understand. 2. Results show the method has significant empirical performance. 3. Technical tricks and limitations are well discussed, for example, in Section 4.2 and Section 5.3. 4. Implementing most baselines in PyTorch seems to be a lot of work. This is also a contribution, since it provides a platform to compare fairly with previous methods. Weaknesses: 1. To me, 7 DoF is still not a high-dimensional task (TIAGo++ should be, if you are not using action primitives). Since the authors claim that the approach works better than the baselines in high-dimensional tasks, I would appreciate a systematic comparison to other baselines in more high-dimensional tasks, probably in addition to TIAGo++. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How is $D^P$ initialized at the beginning of the algorithm? I guess it is not that important since you will rotate it at each step, but I still would love to know. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately addressed. 
I do not see any significant potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the positive feedback on our contributions. Regarding the reviewer's comments and questions: - **On the dimensionality of the trajectory optimization problem**: We do not consider any movement primitives in our experiments. In all cases, we consider optimizing the full-state (the concatenation of position and velocity) trajectories batch-wise. As mentioned in the experiment description (Sec. 5.1), the state dimension (configuration position and velocity) is $d=4$ for the point-mass experiment, $d=14$ for the Panda experiment, and $d=36$ ($3$ dimensions for the base, $1$ for the torso, and $14$ for the two arms, plus their velocities) for the mobile manipulation experiment. Note that the typical optimizing variable would be the whole trajectory, leading to variable dimension $T \times d$. In our vectorization setting, it would be batched $b \times T \times d$. Hence, even for the Panda case, typical full-state motion planning with multiple objectives, such as obstacle avoidance, self-collision avoidance, smoothness, goal-reaching, joint limit handling, etc., is inherently challenging due to many local minima and the high computational cost of evaluating many objectives. Notably, we design the TIAGo++ mobile manipulation experiment as a stress test, reflecting even more performance differences between MPOT and the baselines. - **The importance of full-state trajectory optimization**: Optimizing a full-state (position & velocity) trajectory is vital for many robotics tasks, e.g., obstacle avoidance or human-robot collaboration, where smooth motion is desirable. A full-state reference trajectory generally allows the low-level controller to track it more smoothly. In contrast, if the trajectory is position-only, it is typically challenging to tune the gains of the position controller: accurate tracking of the reference trajectory induces large jerks, while lower jerks come at the cost of inaccurate tracking. 
- **Additional baseline comparisons on the TIAGo++ environment**: We conducted additional comparative experiments on the TIAGo++ environment to better support our claims, and according to the reviewer's request, the results are available in the rebuttal attachment. Apparently, CHOMP performs worse than GPMP2 and takes more iterations in a cluttered environment. However, according to the new table result, CHOMP beats GPMP2 in run time complexity in the TIAGo++ environment due to its simpler update rule. Regarding SGPMP, we could not tune SGPMP to surpass a success rate of 5/20; hence we opted for the minimum iterations to achieve the mentioned success rate. The problem lies in how SGPMP explores with constant GP variance in a high-dimensional setting; thus, a possible future extension of SGPMP with adaptive explorative variance could mitigate the problem. Similarly, STOMP's exploration mechanism is even more restrictive, and we could not tune it to work in this environment. Overall, with the efficient Sinkhorn Step facilitating individual waypoint exploration, MPOT outperforms the baselines in terms of run time complexity by a considerable margin in this case while maintaining reasonable smoothness and task performance. - **How is $D^P$ initialized at the beginning of the algorithm?** In this paper, we adopt the common regular polytopes (i.e., simplex, orthoplex, and hypercube), which are known to exist in any dimension. Their vertex coordinate computations are well known in the geometry literature. Hence, for completeness, we provide a description of the polytope vertex construction in Appendix F. For the actual implementation, we also provide example code in the supplementary material. The vertex construction functions are found in `mpot/mpot/utils/polytopes.py`. 
Interestingly, we conducted an ablation study and showed that the polytope structure and random rotation are essential for sample efficiency in terms of search directions, compared to just sampling random search directions on a $d$-dimensional sphere, as reflected in Table 5 in Appendix J3. --- Rebuttal Comment 1.1: Title: Good job Comment: Thank you for fully addressing my questions. 36D is high-dimensional, and the new result is definitely impressive. I’ll raise the score to 7. --- Rebuttal 2: Comment: Please be sure to read the authors' response to your initial review and reply indicating the extent to which it resolves your initial questions and concerns.
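As an illustration of the polytope vertex constructions and the random rotations discussed above: the sketch below is our own NumPy rendering with hypothetical function names; the released code in `mpot/mpot/utils/polytopes.py` and the paper's Appendix F are authoritative.

```python
import numpy as np
from itertools import product

def orthoplex_vertices(d):
    # 2d unit vertices: +/- e_k for each axis; they sum to zero by symmetry.
    return np.concatenate([np.eye(d), -np.eye(d)], axis=0)

def hypercube_vertices(d):
    # 2^d vertices in {-1, +1}^d, rescaled to unit norm.
    return np.array(list(product([-1.0, 1.0], repeat=d))) / np.sqrt(d)

def random_rotation(d, rng):
    # QR decomposition of a Gaussian matrix yields an orthogonal matrix;
    # the sign fixes below make it a proper rotation (det = +1).
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q = Q * np.sign(np.diag(R))   # fix the sign convention of the factorization
    if np.linalg.det(Q) < 0:      # flip one axis if needed to get det = +1
        Q[:, 0] = -Q[:, 0]
    return Q
```

Applying such a rotation to a vertex set preserves all norms and pairwise angles, so the rotated polytope remains a valid (randomized) set of search directions.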
Summary: The paper proposes a trajectory optimization method using the Sinkhorn Step, which is able to perform efficient gradient-free batch optimization with non-linear objectives. Strengths: Gradient-free motion optimization is an important area with broad potential applications. The contribution of the paper is novel. An original update rule is proposed, which significantly improves the convergence speed and trajectory quality. The paper is well-structured. The experiment results are adequate and easy to understand. Weaknesses: It is unclear how the proposed method compares to learning-based trajectory planning approaches, and also how large the improvement of the proposed method over existing optimization methods is when using learning-based trajectory predictions as initializations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are well-discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our paper contributions. Learning-based motion planning methods [1] typically utilize a dataset from previously successfully generated plans to learn generative priors for generating plans directly [2] or using rollout samples as initialization for motion planners [3]. Differently from the learning-based setting, we propose an optimization-based planner without any learning elements, and we conduct comparison experiments with other widely-used motion planners to support the paper's claims. Per the reviewer's suggestion, a learning-based setting can naturally complement MPOT to provide even better initializations, as currently, we only use GP priors to provide random initial smooth trajectories. However, this is considered for future work. In the current research, we focus on the proposition of the novel operator, the Sinkhorn Step, that allows us to tackle motion planning problems, resulting in an efficient trajectory optimization method producing smooth trajectories. This opens up exciting future directions for learning to plan applications, e.g., using MPOT as an expert for learning to plan or using implicit or explicit learned policies to improve MPOT further. [1] J. Wang et al., "A survey of learning-based robot motion planning," IET Cyber-Systems and Robotics, vol. 3, no. 4, pp. 302–314, 2021. [2] A. H. Qureshi, A. Simeonov, M. J. Bency, and M. C. Yip, "Motion planning networks," in IEEE ICRA, 2019. [3] J. Urain, A. Le, A. Lambert, G. Chalvatzaki, B. Boots, and J. Peters, "Learning implicit priors for motion optimization," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and effort in reviewing our paper, and their constructive and positive evaluations of our work. Below are the main questions and concerns from the reviewers that we have addressed. We briefly state the discussions as follows (full answers can be found in the reviewers' rebuttal boxes): - **Comparison with learning-based motion planning methods** Learning-based motion planning methods typically utilize a dataset from previously successfully generated plans to learn generative priors for generating plans directly or using rollout samples as initialization for motion planners. In fact, the learning-based methods can naturally complement MPOT to provide even better initializations, as currently, we only use GP priors to provide random initial smooth trajectories. However, this is considered for future work. In the current scope, we focus on the proposition of the novel operator, the Sinkhorn Step, that allows us to tackle motion planning problems, resulting in an efficient trajectory optimization method producing smooth trajectories. - **On the dimensionality of trajectory optimization problem** We do not consider any movement primitives in our experiments. In all cases, we consider optimizing the full-state (the concatenation of position and velocity) trajectories batch-wise. As mentioned in the experiment description (Sec. 5.1), the state dimension (configuration position and velocity) is $d=4$ for the point-mass experiment, $d=14$ for the Panda experiment, and $d=36$ ($3$ dimensions for the base, $1$ for the torso, and $14$ for the two arms, and their velocities) for the mobile manipulation experiment. Note that the typical optimizing variable would be the whole trajectory, leading to variable dimension $T \times d$. In our vectorization setting, it would be batched $b \times T \times d$. 
Hence, even for the Panda case, typical full-state motion planning with multiple objectives, such as obstacle avoidance, self-collision avoidance, smoothness, goal-reaching, joint limit handling, etc., is inherently challenging due to many local minima and the high computational cost of evaluating many objectives. - **Baseline implementation considerations** We further describe our experiment settings in Appendix I. Striving for a fair comparison, in all cases besides RRT*/I-RRT*, we reimplemented all baselines in PyTorch and fine-tuned them with the vectorization option. To the best of our knowledge, the baselines are not explicitly designed for plan vectorization. Thus, their associated public codebases are implemented for single-plan querying. - **Additional baseline comparisons in the TIAGo++ environment** We conducted additional comparative experiments on the TIAGo++ environment to better support our claims, and according to the reviewer's request, the results are available in the rebuttal attachment. Following the reviewers' suggestion, we tried SGPMP and observed that we could not tune SGPMP to surpass a success rate of 5/20; hence we opted for the minimum iterations to achieve the mentioned success rate. The problem lies in how SGPMP explores with constant GP variance in a high-dimensional setting; thus, a possible future extension of SGPMP with adaptive explorative variance could mitigate the problem. Similarly, STOMP's exploration mechanism is even more restrictive, and we could not tune it to work in this environment. Overall, with the efficient Sinkhorn Step facilitating individual waypoint exploration, MPOT outperforms the baselines in terms of run time complexity by a considerable margin in this case while maintaining reasonable smoothness and task performance. In summary, we propose the Sinkhorn Step - a batch gradient-free optimization operator formulated as an Optimal Transport problem, leveraging the efficient Sinkhorn algorithm. 
We then apply the Sinkhorn Step to trajectory optimization, resulting in the Motion Planning via Optimal Transport (MPOT) method. MPOT is inherently a multi-modal motion planner that optimizes a batch of high-dimensional smooth trajectories over multiple non-convex objectives, exhibiting individual waypoint exploration for better local-minima escape. Finally, we also present a preliminary investigation of the theoretical properties of the Sinkhorn Step under standard assumptions. Pdf: /pdf/9355167b16654274e5e69885deea6b5cff918ba5.pdf
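The batched $b \times T \times d$ setting above, combined with the flattening of batch and horizon dimensions mentioned in the per-reviewer responses, can be sketched as follows; `point_update` is a hypothetical stand-in for the actual Sinkhorn Step update.

```python
import numpy as np

def batched_update(trajs, point_update):
    # trajs: (b, T, d) batch of full-state trajectories. Flatten the batch and
    # horizon dimensions so every waypoint is updated in parallel by a
    # per-point rule, then restore the original shape.
    b, T, d = trajs.shape
    return point_update(trajs.reshape(b * T, d)).reshape(b, T, d)
```

Because the update treats all b*T waypoints as one flat batch, a single vectorized call covers every trajectory at once; the coupling between waypoints of the same trajectory comes from the cost model, not from this reshaping.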
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Intelligent Grimm - Open-ended Visual Storytelling via Latent Diffusion Models
Reject
Summary: The authors aim to generate a series of coherent images given a series of text prompts resembling a visual storybook. To do so, the authors focus on two fronts: (1) leveraging the Stable Diffusion model to generate the series of images and (2) generating a diverse dataset used to train the model on a range of styles. For generating a set of coherent images, the authors condition the Stable Diffusion on both the text prompt and a set of previously generated frames, both encoded using a frozen CLIP encoder. The text conditioning is passed to the U-Net layers via the standard cross-attention module and LoRA. To insert the image conditioning, the authors introduce a Visual Context Model resembling the standard text conditioning module. The diffusion model is then partially trained to generate images that are both consistent with the text prompt and previously generated frames. To attain images ranging in style, the authors construct a new dataset, named StorySalon, consisting of Youtube videos and E-Books. These raw storybooks are filtered and re-captioned to better align with the visual content of the storybook. Qualitative results demonstrate the ability to generate new storybooks on prompts generated by ChatGPT while quantitative results demonstrate improvements over simple baselines. Strengths: - The authors focus on an important task of generating a series of coherent images that follow a given text prompt. This could potentially be useful beyond a simple storybook creation, e.g., for video generation. - The Visual Context Module as a means for injecting image-level details to the denoising network is simple and intuitive and could be useful in other tasks. For example, in image editing where the desired edit cannot be easily described using language. - The visual results generated by StoryGen are impressive in comparison to the evaluated baselines. 
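The conditioning summarized above reduces, in both the text and visual cases, to standard cross-attention in which the context enters only through the keys and values; this is why a CLIP-encoded previous frame can be injected by a module mirroring the text-conditioning path. Below is a minimal single-head NumPy sketch; the shapes, names, and single-head form are our simplifications, not the paper's module.

```python
import numpy as np

def cross_attention(x, context, Wq, Wk, Wv):
    # x: (L, d) U-Net feature tokens; context: (M, dc) CLIP text or image tokens.
    # Queries come from the features; keys and values come from the context,
    # so the same block serves text conditioning and a visual-context module.
    Q, K, V = x @ Wq, context @ Wk, context @ Wv
    logits = Q @ K.T / np.sqrt(K.shape[1])   # scaled dot-product scores
    logits -= logits.max(axis=1, keepdims=True)
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)        # attention weights per feature token
    return A @ V
```

When every context token is identical, the attention weights become uniform and the output collapses to that token's value projection.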
Weaknesses: **Dataset Creation:** - I have some reservations regarding the use of the term Human Feedback in Section 3.2.3 and Section 4.2. While ChatGPT was fine-tuned to align with human preference, I believe that using ChatGPT for generating additional prompts should not be considered Human Feedback. While this is an intriguing approach, I believe replacing Human with LLM is more reflective of what is actually done here. - Regarding the ablation study performed by the authors, it seems that “Human Feedback” leads to a quite negligible decrease in FID and I am therefore uncertain if this really contributes to the curriculum learning scheme. The authors mention that more stories can be added, but I would have expected to see a bigger improvement if this stage is truly important. **Evaluation:** - There are numerous essential evaluations that are missing from the current submission. Among these, the most important is a thorough evaluation and comparison of StoryGAN, Story-DALL-E, and AR-LDM. All three have publicly available code so an evaluation is needed to understand the improvement realized by StoryGen. - I am not sure that FID is a particularly interesting metric here since all evaluated methods in Table 1 use Stable Diffusion to generate the images. Moreover, I do not believe that FID is a good metric when trying to measure how much the image captures a given style, as is the goal here. Maybe a CLIP-based metric using a prompt depicting a style would be more appropriate here? - There are numerous ablation studies that I believe are required to understand the contribution of both the proposed architecture and dataset. - Architecture: - An ablation study on the Visual Context Module and whether a simpler conditioning is possible (see my detailed question below). - Was an ablation study performed on the BERT-like masking during the multi-frame fine-tuning? - Dataset - The impact of the visual-language alignment stage in preparing the dataset. 
The authors state that directly fine-tuning on the story narrative may be detrimental, but do not validate this claim. - Some additional evaluations could help validate the effectiveness of the method. - First, the authors claim that the method can be used to generate stories of arbitrary lengths (Line 106). It would be great to quantify this by generating stories of varying lengths and validating whether there is a loss in quality after a certain length. - One particularly interesting component of the method is the Visual Context Module, so I would have liked to see far more evaluations performed on it. For example, the authors mention that multiple frames can be used for conditioning by concatenating their CLIP features. Some interesting questions that could help strengthen the importance of the component include: - How much was this evaluated? - How much does conditioning on more frames assist in temporal consistency? - How many previous frames can be concatenated without hindering performance? - Stating that the model achieves a significant improvement over the alternative models seems like strong over-claiming when approximately 30 participants were used for the user study. A substantially larger pool of participants would be needed to truly quantify this improvement, especially since this is the only relevant metric used to evaluate the methods. Why are the reported FID metrics between Table 1 and Table 2 different? Is a different dataset used? I would expect only “without HF” to be different if both use the same dataset. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I hope the authors can help clarify several questions that I have regarding both the method and the evaluations. **General Comments:** - A reference to Rombach et al. is missing when discussing Stable Diffusion (e.g., in Line 38 where it is first used). 
- The first stated contribution is that the authors propose the task of open-ended visual storytelling, but this was a previously studied problem (as also mentioned in Line 44). - When describing the style transfer module and visual context module, the authors reuse the same notation for the weight matrices \(W^K\) and \(W^V\). To make the writing a bit clearer, the authors should use a different notation for each case since these are separate modules, if I understood correctly. **Method:** - What happens if the image conditioning is simply done using the standard image-to-image technique where the noised latent is conditioned on the previous image? Adding an ablation study would help validate this design choice. If performing a thorough ablation study is difficult, providing intuition on the motivation for the Visual Context Module would help highlight the contribution to readers. - Where is the style transfer model illustrated in Figure 2? Is this the block labeled LoRA? The authors mention that this is a LoRA-like architecture. Could the authors kindly clarify what exactly the difference from the original LoRA design is and why this modification was made? - Regarding the Multi-Frame Conditioning: - According to Figure 2, it seems like StoryGen receives several previous frames as conditioning while Line 136 indicates that only the previous frame is used for conditioning. Could the authors kindly clarify this? - Similarly, in the general setting, it seems that multiple frames are used for conditioning. Therefore, would it be more accurate to revise the equation in Line 150 to indicate any number \(\ell\) of frames as conditioning? - In Section 5.1, the authors mention that the multiple-frame fine-tuning stage is done using a single conditioning image (Lines 256-257). Where do the multiple frames come into play? Is this really multi-frame conditioning? 
**Small comments:** - Line 70: ou -> our - Line 142: generate -> generates, align -> aligns - Line 276: Propmt -> Prompt Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors include a discussion on current limitations and potential societal impacts in the supplementary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
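The image-conditioning path discussed in this review (a frozen CLIP encoder whose tokens are injected into the U-Net via an extra cross-attention, with multiple previous frames handled by concatenating their CLIP features) can be illustrated with a minimal sketch. All shapes, the random projections, and the function name below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latent_tokens, context, d_head=64, seed=0):
    """Scaled dot-product cross-attention: queries come from the noised
    U-Net feature map, keys/values from frozen CLIP features of the
    previous frame(s). Weights here are random stand-ins."""
    rng = np.random.default_rng(seed)
    d_model, d_ctx = latent_tokens.shape[1], context.shape[1]
    W_Q = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
    W_K = rng.standard_normal((d_ctx, d_head)) / np.sqrt(d_ctx)
    W_V = rng.standard_normal((d_ctx, d_head)) / np.sqrt(d_ctx)
    Q, K, V = latent_tokens @ W_Q, context @ W_K, context @ W_V
    attn = softmax(Q @ K.T / np.sqrt(d_head))
    return attn @ V  # one attended feature per latent token

# Conditioning on several previous frames = concatenating their CLIP tokens.
frame1 = np.random.default_rng(1).standard_normal((77, 512))
frame2 = np.random.default_rng(2).standard_normal((77, 512))
context = np.concatenate([frame1, frame2], axis=0)          # (154, 512)
latents = np.random.default_rng(3).standard_normal((64, 320))
out = cross_attention(latents, context)
print(out.shape)  # (64, 64)
```

Because the context axis is just the key/value sequence, adding more conditioning frames only lengthens that axis, which is why concatenation is the natural multi-frame extension the reviewer asks about.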
Rebuttal 1: Rebuttal: Thanks for the positive feedback on our task setting, architecture and results. Hope the response below will resolve your confusion and thus raise the score accordingly. We are always open to further discussion. - W1&2. The name and necessity of human feedback - Please refer to Q5 of the global response. We consider the manual filtering process as a reflection of human preference, and a larger amount of human feedback data will lead to further improvement in performance. - W3. Lack of comparison with StoryGAN, Story-DALL-E, and AR-LDM - Please refer to Q6 of the global response. They are indeed weaker baselines compared with SDM. - W4. FID metrics or CLIP-based metrics - Please refer to Q8 of the global response. We provide evaluation results on CLIP score and PickScore (CLIP-based) in the rebuttal PDF. - W5-1. Ablation studies on BERT-like masking - Please refer to Q7 of the global response. We provide quantitative and qualitative results in Table 2 and Figure 5 (rebuttal PDF). - W5-2. Ablation studies on directly fine-tuning on the story narrative - We provide quantitative results on FID and CLIP-based scores in the rebuttal PDF. As shown in Table 2 (rebuttal PDF), the model fine-tuned with story narrative has demonstrated a noticeable decline in performance, which agrees with our hypothesis that text aligning closely with the prompt style of SDM will yield superior image quality. - W6. Evaluation on generating stories of varying lengths - We provide a qualitative experiment result in Figure 3 (rebuttal PDF). We agree with the reviewer that, in order to train the model to generate longer image sequences, more training data and computation are required. - W7. Ablation studies on visual context module - The effectiveness of the Visual Context Module has been demonstrated in the supplementary by comparing StoryGen-Single with StoryGen, quantitatively in Table 1 and qualitatively in Figures 2, 3, and 4 (supplementary). 
- We present the FID and CLIP-based results on the StorySalon test set of StoryGen with different numbers of conditioned frames in Table 3 (rebuttal PDF). More conditioning frames lead to a quantitative improvement, but not a very significant one. This also agrees with our convenient inference setting of using a single conditioned frame. - W8-1. A larger pool of participants - Thanks, we will be more cautious with the tone, and find more participants in the final version. In future work, we aim to construct an online platform that enables collecting human feedback on a much larger scale. - W8-2. Results of Table 1 and Table 2 - Please refer to Q3 of the global response. - Q1. Reference on L038 - Thanks, we have cited this paper in L031. - Q2. The novelty of open-ended visual storytelling - Please refer to Q1 of the global response. - Q3. The notation abuse - Thanks! We will add additional footnotes to distinguish the projection matrices in the style transfer module and visual context module. - Q4. Condition the noisy latent on the previous image - Thanks for the suggestion, we have indeed tried some other methods, including DDIM inversion and concatenating the noisy latent with the previous image, but none of them worked well. The intuition is that these methods tend to keep the layout of the image unchanged and the content (i.e., character appearance) different, which is against our problem scenario, where the content is unchanged and the layout different. However, we agree that they are interesting directions to be explored in future work. - Q5. Details of LoRA-like architecture - To augment the expressiveness of LoRA, we use a larger intermediate representation dimension in its hourglass-like architecture. Due to this misalignment with the requirement of low rank, we refer to this module as a LoRA-like architecture. - Q6-1&6-2. Inconsistency between Figure 2 and L136, L150 - Please refer to Q4 of the global response. 
- Figure 2 is to demonstrate the ability of our model to condition on multiple frames, while L136 and L150 describe the practical setting for convenience. We have added new experiments on multiple conditioned frames in Table 3 (rebuttal PDF), which show insignificant performance improvement. - Q6-3. The name of the multiple-frame fine-tuning stage - Multiple-frame fine-tuning is named relative to single-frame pre-training. We have added new experiments conditioning on multiple frames; we will clarify this in our final paper. - Q7. Small comments - Thanks! We will correct the typos in the final paper.
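The "LoRA-like architecture" described in Q5 of this rebuttal (a down-projection/up-projection adapter whose intermediate dimension is deliberately larger than a typical LoRA rank) can be sketched as follows. The dimensions, variable names, and zero-initialization convention here are assumptions for illustration, not the authors' exact configuration:

```python
import numpy as np

def lora_like(x, W, A, B, scale=1.0):
    """Frozen base projection plus a trainable bottleneck branch.
    Unlike standard LoRA, the intermediate dimension r need not be low-rank."""
    return x @ W + scale * (x @ A) @ B

rng = np.random.default_rng(0)
d_in, d_out, r = 320, 320, 128          # r larger than typical LoRA ranks (4-64)
W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)   # frozen base weight
A = rng.standard_normal((d_in, r)) / np.sqrt(d_in)       # trainable down-projection
B = np.zeros((r, d_out))                                  # trainable up-projection, zero-init
x = rng.standard_normal((10, d_in))
y = lora_like(x, W, A, B)
# With B zero-initialized, the adapter branch contributes nothing at the start,
# so the module initially reproduces the frozen base projection exactly.
assert np.allclose(y, x @ W)
```

The zero-initialized up-projection is the common LoRA convention for starting training from the pre-trained model's behavior; the only deviation sketched here is the larger intermediate dimension `r` the rebuttal mentions.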
Summary: This work proposes the model StoryGen for the task of visual storytelling. Visual storytelling is a task to generate a sequence of consistent images given a story (several sentences). StoryGen is a diffusion model taking in both image and text as conditions, and outputs an image consistent with the conditions. The training process includes pre-training for a single image, fine-tuning for multiple images, and fine-tuning with human feedback. On top of the StoryGen model, this work also provides a dataset, called StorySalon, which consists of 2k storybooks (30k well-aligned text-image pairs). The overall structure of the StoryGen model is simple. It is built upon existing well-trained diffusion models and image/text encoders. To generate cartoon-like images, LoRA is adopted into the text conditioning module in a diffusion model. The author calls it the style transfer module. The parameters in LoRA are updated at this pre-training stage to generate a single cartoon image. Next is the multiple-image fine-tuning. StoryGen conditions on both text and image, which is implemented by using two cross-attention layers: one is noise input + text and the other is noise input + encoded previously generated image. After the second step, StoryGen is further fine-tuned on 100 high-quality stories. The author also spends some effort to collect the StorySalon dataset. To begin with, the author downloads a huge number of videos and subtitles from online web resources with potential stories. Then story-level and visual-level descriptions are given for each story. The story-level description is obtained by using a dynamic time warping algorithm on the subtitles. The visual-level description is derived from ChatCaptioner. Finally, an OCR method is applied to get potential video captions. The experiment section shows that the StoryGen model can give consistent and story-like output images, while other methods fail. Strengths: The paper is clearly written. 
The collected StorySalon dataset could benefit the research community. Weaknesses: There is not much technical novelty, and the experimental results are limited. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Line 192 mentions the 100 high-quality books are added into the training set. For the third-stage training, is it still using the datasets from previous steps, but with 100 more samples? Would it make more sense to only use the 100 high-quality books for further fine-tuning? Also, the name (fine-tuning with human feedback) is confusing since there is neither human feedback nor a reward model. Is there any human involved to give preferences over different generated outputs from the StoryGen model? - The human feedback fine-tuning stage doesn’t seem to help a lot in the quantitative scores. It would be better to include visual examples without human feedback fine-tuning. If the difference is too small, it would be better to re-structure the paper. - For the visual examples given in Figure 4, I noticed the boy with yellow hair shows up in both stories. Does this boy appear a lot in the training set? Also, it would be better if the author could provide more than one example for each story using StoryGen. Otherwise, it feels like the model is overfitting to the training data. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: - Since there is no human labeler involved in collecting the StorySalon dataset and the total number of stories in StorySalon is only 2k, I’m concerned about its quality. - It is also unclear from the examples provided if the results are derived by just overfitting the training set. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your affirmation and appreciation of our writing and dataset. Hope the response below will resolve your confusion and thus raise the score accordingly. We are always open to further discussion. - W1. Not much technical novelty - Please refer to Q1 of the global response. - W2. Limited experimental results - We have presented additional ablation studies on further Human Feedback and the Visual Context Module in Table 1 (supplementary). These experiments prove that further Human Feedback fine-tuning with more data will lead to a better FID score, and the Visual Context Module will also benefit the model. During the rebuttal period, we provide more qualitative and quantitative results in the rebuttal PDF. - Q1-1. The use of 100 books - In the Human Feedback process, we are using the original dataset along with the 100 generated new storybooks, which contain around 600 more text-image pairs. In Table 1 (supplementary), we scaled the human feedback data to 700 stories and around 3k pairs, and achieved further improvement in terms of FID compared to fine-tuning with 100 more storybooks. - Compared with our whole StorySalon dataset, the improved human feedback data of 3k samples still appears to be a relatively small amount of data. Directly fine-tuning StoryGen with these storybooks may cause catastrophic forgetting and destroy the pre-trained text-image alignment in the latent space of SDM. - Q1-2. The name of human feedback - Please refer to Q5 of the global response. We consider the manual filtering process as a reflection of human preference. - Q2. The effects and visualization of human feedback - Please refer to Q5 of the global response. A larger amount of human feedback data will lead to further quantitative improvement. And Figures 2, 3, and 4 (supplementary) show that the model with augmented human feedback fine-tuning (3k samples) could produce images with better consistency and quality. 
So human feedback is effective, but requires larger-scale screened data to significantly reflect its value. - Besides, as presented in L189-L190, another potential advantage of Human Feedback is to avoid potentially scary, toxic or biased content, which cannot be measured by quantitative metrics. - Q3. The yellow-haired boy and overfitting - Thanks for pointing this out. After checking the training set, a yellow-haired boy with a similar look does appear in the data, though not frequently. However, we disagree with the comment that this indicates overfitting; as shown in recent work [1], diffusion models can potentially generate samples similar to seen images, regardless of the scale of the training set. - Besides, as shown in Figures 2, 3, and 4 (supplementary), the visual results of StoryGen-Single (no Visual Context Module) show explicit inconsistency compared with StoryGen and StoryGen-HF, which also proves that we achieve consistency in generation by the Visual Context Module instead of overfitting. - [1] Carlini et al. EXTRACTING TRAINING DATA FROM DIFFUSION MODELS. - L1. The StorySalon dataset quality - There are human labellers involved in collecting the StorySalon dataset, which has been mentioned in L215-L216. In the data collecting process, we also thoroughly went through the dataset multiple times, manually checked and removed the frames that do not satisfy our demand but were not found by automatic filtering. - Please refer to Q1 of the global response for our data quality. Our StorySalon has a similar data scale on image-caption pairs compared with previous datasets, and surpasses them with a much larger vocabulary and a far longer average story length. - L2. Overfitting - Please refer to Q3 in this response. More qualitative results are presented in Figure 1 (rebuttal PDF) for demonstrating the diverse generation ability of our model. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response. This is an interesting work. 
I'll keep my score as weak accept.
Summary: The work focuses on the application of image generation based on a given story. Specifically, the proposed model is conditioned on the current sentence and prior generated images to ensure the story is engaging and coherent. A progressive training strategy is proposed to achieve a good model. To improve the proposed method, a new dataset is collected, while a set of human-verified generative samples are also utilized to improve the generated images. -------------------------- I acknowledge the author's effort in the rebuttal and have made changes to the review accordingly. Strengths: + This work demonstrates the possibility of generating visual storytelling images conditioned on the given stories. + A three-stage curriculum training strategy is proposed to train the proposed model. However, it would be great to demonstrate the limitation of training the model with multiple frames (i.e., without single-frame pre-training) + The authors collected a large-scale dataset to enable model training for storytelling purposes. Weaknesses: - The technical contribution of this work is limited. Most of the components are not novel and the key contribution is the way they are combined to generate a plausible output. It is unclear what insights generated from this work were not previously obvious to the community. - The description of the new StorySalon dataset is limited. Specifically, it is unclear if the collected dataset has obtained legal consent and properly handled copyright issues. - The work lacks a comparison with existing work, such as those introduced in line 44 and lines 93-102. The two baselines in Table 1 are too naive, as both are inherently limited, to provide a fair comparison with the proposed method. Minor: - Fig 2 should clearly state that the Image encoder only considers a single previous frame to generate the next frame. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Please justify what it means by "open-ended". 
Was the dataset divided in a manner such that the training and validation/test datasets have distinct distributions or content? - The final stage of the curriculum training strategy fine-tunes the model with human feedback. Here, the stories are generated with ChatGPT and the images are synthesized with the same model. This creates a problem of bootstrapping machine learning and makes it unclear whether such an approach can benefit the model. A recent work [1] discussed the curse of recursion with generated data. I want to invite the authors to discuss this problem. In addition, the results in Table 2 show that with or without HF there is only a small difference. Can the authors also explain how the 0.19 gap translates to differences in the qualitative results? [1] Shumailov et al. THE CURSE OF RECURSION: TRAINING ON GENERATED DATA MAKES MODELS FORGET. https://arxiv.org/pdf/2305.17493.pdf - In line 141, why is a small number of newly-added trainable parameters required and how does it impact the model performance/learnability? - In Tables 1 and 2, why are the FID scores inconsistent? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper (in supplementary) discusses that data bias is an issue that needs to be addressed in this domain. Collecting a larger dataset for training is the solution discussed. This may be valid considering this work is still in the early stage of the research. I want to point out that the discussed approach is limited as (1) it is resource-consuming, and (2) it will face the problem of copyright in order to obtain a good dataset for training. The data ownership issue may be a major hindrance. 
Flag For Ethics Review: ['Ethics review needed: Privacy and Security (e.g., consent, surveillance, data storage concern)'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your affirmation and appreciation of our writing and dataset. Hope the response below will resolve your confusion and thus raise the score accordingly. We are always open to further discussion. - S2. Limitation of training the model without single-frame pre-training - We have stated in the manuscript of our submission (L152-L158 and L177-L182) that the Style Transfer Module trained via single-frame pre-training allows StoryGen to quickly adapt to the style of storybooks. Without this stage, it would be difficult to generate the first frame with the correct style. There is an obvious domain gap between the style of real-world images and that of the training data for the Visual Context Module, which will finally weaken the effect of multi-frame fine-tuning. - W1. Limited technical contribution - We disagree, please refer to Q1 of the global response. - Besides, our insights also lie in thinking about the limitations of existing generative models and how to extend them to new tasks, specifically how to collect appropriate data and design suitable architectures. - W2. Limited description of the StorySalon dataset and copyright issues - Please refer to Q1 and Q2 of the global response. We have presented a visualization of a portion of the data in Figure 3, and we have also performed statistics on the categories included in StorySalon, as shown in Figure 1 (supplementary), confirming the diversity of our data. More examples can be seen in Figure 2 (rebuttal PDF). - W3. Baselines and comparison with existing work - Please refer to Q6 of the global response. Previous works are even weaker baselines compared with SDM and Prompt-SDM, so we do not compare with them. Besides, we compare our model with StoryGen-Single shown in Table 1 (supplementary) and fine-tuned SDM in Table 2 (rebuttal PDF), which are both stronger baselines. - W4. Illustration of Figure 2 - Please refer to Q4 of the global response. - Q1. 
The meaning of 'open-ended' - Please refer to Q1 of the global response. In more detail, the meaning of 'open-ended' can be interpreted from two different perspectives: - On one hand, for previous Story Visualization and Story Continuation tasks, StoryDALL-E or AR-LDM tend to overfit datasets containing only a few characters, such as FlintstoneSV and PororoSV, and cannot generalize to other datasets or prompts, that is, they are closed rather than open-ended. - On the other hand, SDM is pre-trained on large-scale text-image pair data, and can generate novel images with novel text prompts. Our StoryGen model inherits the prior knowledge of SDM and thus can extend to generate novel story frames with novel storylines. - Benefiting from the StorySalon dataset and the StoryGen model, we can prompt ChatGPT to generate a series of new storylines, and our model can generate new visual stories with new characters, not limited to only a few protagonists as in the past. This is why our new task is called 'open-ended visual storytelling'. - Q2. The problem of bootstrapping and the small performance difference with or without HF - Please refer to Q5 of the global response. A larger amount of human feedback data does lead to further improvement in quantitative performance. As for the bootstrapping problem, this has been shown effective in InstructGPT and ChatGPT, thus we follow similar training routines as those works. - Q3. The necessity of the newly-added trainable parameters and the impact - As stated in L140-L141, our StoryGen model can be regarded as a semi-frozen SDM with a small number of newly-added trainable parameters, which specifically refer to our proposed Style Transfer Module and Visual Context Module. The former aligns the style of images generated by StoryGen to storybook images, and the latter enables our model to use the generated previous frame as a visual condition for autoregressively generating a coherent story sequence. 
- Results in Table 1 and Figures 2, 3, and 4 of the supplementary can illustrate the effectiveness of our proposed modules both quantitatively and qualitatively. - Q4. Inconsistent FID score - Please refer to Q3 of the global response. - L1. Discussion on resource and copyright problems for a larger dataset - Please refer to Q2 of the global response. The data processing pipeline contains multiple carefully designed steps to ensure the quality of data, but each step does not need to consume many computational resources. We will open-source our data processing pipeline to the community. Once researchers have suitable and feasible data sources, they can easily expand our StorySalon dataset. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to the review and the additional results in the provided PDF. Can the author provide additional comments about the video data of the proposed dataset? While "we will release in the form of YouTube URLs" is a plausible practice, it has been evidenced that videos could be removed from the platform and become inaccessible to other researchers. This could potentially affect the reproducibility and availability to other researchers. About the bootstrapping problem, I hope the authors can provide more discussion based on the findings in [1]. Do also analyse the generated feedback and training size. The amount of data in ChatGPT is on a different scale when compared to the dataset presented in this submission. It is hard to determine if the bootstrapping effect is positive or negative for this work. --- Reply to Comment 1.1.1: Comment: - Thanks for your comments. - Releasing datasets in the form of URLs is a common practice, for example, YouTube-8M [2], WebVid-10M [3] and VideoCC [4]. 
However, we agree with the reviewer that the videos can be removed from the website, thus we are currently expanding our StorySalon dataset, and will replace all YouTube video data with open-source ebook data registered under **CC BY 4.0 license** in order to completely eliminate copyright concerns. - Thanks for pointing out the paper, we have read through it in detail. As mentioned in the response, in our case, we do observe positive gains by doing human feedback, as shown in **Table 1 (supplementary)**. This might be due to the fact that paper [1] only investigated MNIST, which is a significantly simpler dataset than the ones we are using. - In addition, we do observe that these gains are correlated with the volume of feedback data; our current human feedback data for fine-tuning only constitutes a mere **10%** of the StorySalon dataset, which is also significantly smaller in comparison to the LAION-5B employed in SDM pre-training. Thus we are actively developing a user platform to gather more user feedback, aiming to explore the upper bound of the positive improvement of human feedback on our StoryGen model and analyze whether a negative effect would emerge as human feedback expands to a certain scale. - [1] Shumailov et al. THE CURSE OF RECURSION: TRAINING ON GENERATED DATA MAKES MODELS FORGET - [2] Sami Abu-El-Haija et al. YOUTUBE-8M: A LARGE-SCALE VIDEO CLASSIFICATION BENCHMARK - [3] Max Bain et al. FROZEN IN TIME: A JOINT VIDEO AND IMAGE ENCODER FOR END-TO-END RETRIEVAL - [4] Arsha Nagrani et al. LEARNING AUDIO VIDEO MODALITIES FROM IMAGE CAPTIONS
Summary: This paper presents StoryGen, an auto-regressive image generator that leverages text and image conditioning. StoryGen incorporates a style transfer module integrated into the text-conditioning module, along with a visual context module. The authors also constructed a substantial dataset called StorySalon, comprising 2K storybooks and 30K text-image pairs. Strengths: 1. This paper constructs a new dataset, StorySalon, containing 2K storybooks and more than 30K well-aligned text-image pairs. The authors have invested significant effort into filtering the data, making it a valuable resource for advancing the field of story visualization. 2. The paper is well-written and easy to follow. Weaknesses: 1. The illustration does not align with the description provided. In line 135, the authors state that "StoryGen generates the current frame $\mathcal{I}_k$ by conditioning on both the current text description $\mathcal{T}_k$ and the previous frame $\mathcal{I}_{k-1}$, as illustrated in Figure 2." However, the left figure of Figure 2 shows the image conditioned on more than one previous image, which contradicts the mentioned conditioning approach. 2. The improvement from human feedback appears to be trivial, as indicated in Table 2. The 0.19 FID score gap could potentially be attributed to different training seeds, which raises doubts about the significance of the reported improvement. (I do not agree with the statement that 200 stories are too small since the model is trained using 2k stories overall. It appears to be sufficient for human alignment and does not require an extensive amount of data.) 3. The FID score lacks precision on the test set, particularly with only 100 storylines. It is recommended that the authors expand the test set by including more stories to provide a more accurate evaluation. 4. The baselines SDM and Prompt-SDM are too weak. 
It is suggested that the authors compare StoryGen with fine-tuned or LoRA-fine-tuned SDM models using the same training settings to establish a more robust baseline for comparison. 5. The auto-regressive generation approach employed by StoryGen has already been proposed by AR-LDM. Consequently, the architecture design itself lacks novelty. 6. StoryGen is only conditioned on one previous image and does not utilize the corresponding caption of the previous image. In the depicted cases of Figure 1 and Figure 4, there is only one main recurring character. If multiple characters were present, StoryGen may struggle to ground the characters in the previous images. Furthermore, if a character does not exist in the previous image, StoryGen may face difficulties in maintaining consistency between frames. 7. The language understanding capacity of StoryGen appears to be weak. For instance, in the second case of Figure 4, the rabbit appears small in the fourth frame, whereas it should be as big as it is in the fifth frame. Additionally, in the sixth frame, the boy's hair does not become brighter as described in the caption. Moreover, in the seventh frame, multiple other boys are depicted with the same yellow hair, which contradicts the previous story setting. This limitation may stem from StoryGen relying solely on the single previous frame $\mathcal{I}_{k-1}$ and not incorporating previous captions into its generation process. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. StoryGen generates the current frame $\mathcal{I}_k$ by conditioning on the current text description $\mathcal{T}_k$ and the previous frame $\mathcal{I}_{k-1}$. I am wondering why the authors do not use all historical frames $\mathcal{I}_0, \cdots, \mathcal{I}_{k-1}$ in this process. 2. StoryGen introduces a style transfer module $\phi_{text}$ and a single-frame pre-training stage. 
However, since the training set is constructed from multiple sources (as mentioned by the authors in Line 244), the style is inconsistent. Moreover, many LoRA style transfer plugins not only tune the cross-attn module but also the self-attn modules. Why not follow this setting and add LoRA layers in both self- and cross-attn layers? This way, StoryGen can also benefit from other style transfer LoRA developed by the community. 3. In Line 186, the authors mention randomly dropping some words in the text with a certain probability following BERT. I have doubts about this technique because there is no masked language modeling task in StoryGen, and such a technique may not be helpful. 4. I question whether the authors truly need to replace descriptive captions with story narrative text. The story in the storybook and the story generated by LLMs are both in the form of story narrative text, rather than descriptive captions. Using story narrative text to train the model may lead to implicit alignment with human performance. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: 1. The author should provide examples of the constructed dataset showcasing different visual styles and character appearances. 2. The StorySalon dataset consists of 2K storybooks and over 30K well-aligned text-image pairs, which is smaller compared to datasets such as FlintstonesSV (24K stories and 123K image-caption pairs), PororoSV (14K stories and 74K image-caption pairs), and VIST (27K stories and 136K image-caption pairs). Despite the authors' claim that StoryGen can perform open-ended story generation, it remains unclear whether StoryGen can generate stories involving more complex scenarios with unusual entities. 3. 
Apart from human feedback, the authors have not conducted any other ablation studies to evaluate the effectiveness of their proposed techniques, such as word dropout and curriculum learning. 4. There are concerns regarding the legality of using web-crawled e-books. The authors should provide additional information about the sources of the e-books and clarify whether proper copyright guidelines were followed. Flag For Ethics Review: ['Ethics review needed: Compliance (e.g., GDPR, copyright, license, terms of use)'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
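For reference, the BERT-style word dropout questioned above (Q3 of this review) amounts to independently dropping each word of the caption with some probability before encoding. A minimal illustrative sketch (function name and probability are hypothetical, not the paper's code):

```python
import random

def word_dropout(text: str, p: float = 0.15, rng=None) -> str:
    """Drop each word of `text` independently with probability p.

    Illustrative sketch of the regularization discussed in Q3;
    p=0.15 mirrors BERT's masking rate but is an assumption here.
    """
    rng = rng or random.Random(0)  # seeded for reproducibility
    kept = [w for w in text.split() if rng.random() >= p]
    return " ".join(kept)

print(word_dropout("a small rabbit hops across the sunny meadow", p=0.5))
```

Whether such dropout helps without a masked-language-modeling objective is exactly the empirical question the reviewer raises.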
Rebuttal 1: Rebuttal: Thanks for your affirmation of our writing and dataset. We hope the response will resolve your confusion and that you will raise our rating accordingly. We are always open to further discussion. - W1. Inconsistent illustration - Please check Q4 of global response. - W2. Limited improvement of human feedback - Please check Q5 of global response. - W3. The precision of the FID score - Please check Q3 of global response. FID on the StorySalon test set can be considered to accurately reflect the distribution of generated results. - W4. Stronger baselines - As presented in Q1 and Q6 of global response, we choose SDM and Prompt-SDM as baselines, as no existing model has better open-ended generation ability. We also provide results of fine-tuning cross-attn layers of SDM on StorySalon in Table 2(rebuttal PDF) and fine-tuning SDM with LoRA in Table 1(supplementary). StoryGen performs significantly better than them. - W5. Novelty of auto-regressive generation - Please check Q1 of global response. - We did cite AR-LDM in our work. As far as we know, it is an arXiv paper, thus it should be treated as concurrent work. Moreover, although both approaches consider autoregressive generation, there are critical differences between StoryGen and AR-LDM: - (i) AR-LDM uses the heavy BLIP model and integrates image information into the text embedding, while StoryGen proves that the CLIP image encoder can be directly used with an independent module to attend to the image condition; (ii) AR-LDM has to train all the parameters end-to-end including LDM, CLIP and BLIP, while StoryGen favours lightweight training with far fewer parameters to optimise. - W6. Previous captions and character problem - We agree. Our current model may potentially suffer from multiple characters/character loss. - We have tried adding previous captions but this leads to the image condition being ignored. We have further investigated the idea of multiple condition images, as shown in Table 3(rebuttal PDF).
However, the performance gain is limited. We conjecture that this is due to our dataset prior, as we are training/evaluating the model on children's books, which naturally prefer simple stories with a single protagonist. This is an interesting question for our future research while generalising towards sequences of complex stories and natural images. - W7. Weak language understanding ability - We agree. This is because StoryGen adopts pre-trained SDM, which uses the CLIP text encoder, known to suffer from several limitations: for example, it cannot distinguish complex spatial or quantitative relationships, and struggles with affirmative and negative statements. This challenge can be alleviated by stronger text-to-image models like Deep IF, which is regarded as future work. - Q1. Conditioning on multiple frames - Please check Q4 of the global response. We present FID and CLIP-based results on the StorySalon test set of StoryGen with multiple condition frames in Table 3(rebuttal PDF). The performance gain is not significant, possibly due to our dataset prior towards simple stories with a single protagonist, which agrees with our convenient choice to use a single conditioned frame during inference. We will also include the multi-frame results in our revised paper. - Q2. Data style inconsistency and LoRA design - The data style inconsistency is trivial compared with the strong expressive capacity of LoRA. - Here, we aim to learn the style of storybooks with as few parameters as possible, potentially maintaining the original prior of SDM to the greatest extent. It is a cool idea to insert different LoRAs into the model, but we always need to fine-tune the Visual Context Module to maximize its ability when using LoRA with large style differences. So different LoRA designs make no difference in training. - Q3. BERT masking - Please check Q7 of global response. - Q4.
The choice between descriptive captions and story narrative text - We provide quantitative results on FID and CLIP-based scores in Table 2(rebuttal PDF). The model trained with story narrative shows worse performance, which agrees with our hypothesis that texts aligning closely with the prompt style of SDM yield superior image quality. Therefore, during inference, we will first convert narrative texts into descriptive captions with an LLM to maximize the generative ability of the model. - L1. Examples of StorySalon - We present a visualization of a portion of data in Figure 3, and we also show statistics on object categories in StorySalon in Figure 1(supplementary), confirming the diversity of our data. More examples can be seen in Figure 2(rebuttal PDF). - L2. The scale and diversity of StorySalon, and open-ended capability - Please check Q1 of global response. Although other datasets contain more stories, the amounts of image-text pairs are actually on the same scale as ours, because these datasets use a fixed story length (5 frames) while stories in our dataset are far longer than theirs. Besides, our dataset contains the largest vocabulary to date, thus it is clearly more diverse than these datasets. - The 'open-ended' capability is from two sources: (i) our StoryGen inherits the open-set prior of SDM and can ideally generate image sequences of objects and scenes of arbitrary category, i.e., an extension of an image-based generative model; (ii) our storylines generated by ChatGPT can vary drastically, beyond those seen in our training data. The diversity of our generated results is visualized in Figure 1(rebuttal PDF). - L3. More ablation studies of proposed modules - We have provided further ablation study of our proposed modules and curriculum training in Table 1(supplementary). The stepwise performance improvements show the effectiveness of our Visual Context Module and curriculum training including human feedback.
More ablation results on word dropout are presented in Table 2 and Figure 5(rebuttal PDF). - L4. Concerns about the legality of e-books - Please check Q2 of global response. --- Rebuttal Comment 1.1: Title: Thanks for the author response Comment: Thanks a lot for the response. I would like to keep my score as it is.
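The multi-frame conditioning discussed in Q1 (and Q4 of the global response) boils down to concatenating the CLIP features of the previous frames into one visual context. A minimal sketch with an illustrative stand-in encoder (shapes and the `clip_encode` stub are assumptions, not the paper's code):

```python
import numpy as np

def clip_encode(frame):
    """Stand-in for a CLIP image encoder: (H, W, 3) frame -> (num_tokens, dim) tokens.
    77 tokens and 768 dims are illustrative assumptions."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((77, 768))

def build_visual_context(prev_frames):
    # Concatenate per-frame token features along the sequence axis so a
    # cross-attention module can attend over all conditioning frames at once.
    return np.concatenate([clip_encode(f) for f in prev_frames], axis=0)

ctx = build_visual_context([np.zeros((512, 512, 3)) for _ in range(3)])
print(ctx.shape)  # (231, 768): 3 frames x 77 tokens each
```

With a single previous frame this reduces exactly to the single-frame conditioning used at inference.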
Rebuttal 1: Rebuttal: We appreciate all reviewers for the valuable comments and feedback. We hope the following response can fully resolve the raised concerns. We will release all codes, datasets, and models for future research purposes. - Q1. Novelty and Contribution(ALL) We would like to start the rebuttal by elaborating on our contributions in this paper: - (i) **On proposed novel task**. Unlike previous Story Visualization or Story Continuation performed under limited characters/vocabulary, we consider a more challenging yet exciting task, i.e. _open-ended visual storytelling_, which requires generative models to generate **coherent** story frames based on storylines given by users or LLMs under **open-ended** vocabulary, e.g. free-form storylines and novel characters. The diversity of our generated results is shown in Figure 1(rebuttal PDF). - (ii) **On our dataset and processing pipeline**. Previous FlintstoneSV and PororoSV datasets only contain 7~9 characters with limited vocabulary and story length, making them inadequate for the open-set task. We design a complete data processing pipeline, screen data suitable for our task from videos and ebooks, and build the **StorySalon** dataset with a very large vocabulary, which contains thousands of characters from hundreds of categories. - (iii) **On architecture design and training scheme**. We aim to inherit the powerful **open-set** text-to-image generation capability of pre-trained StableDiffusion. Benefiting from our designed Style Transfer Module and Visual Context Module, **StoryGen** can generate coherent story-style image sequences from given storylines by simply training a small number of parameters. Surpassing strong baselines, StoryGen proves the effectiveness of the proposed modules. Overall, we present our preliminary efforts on initiating research on open-ended visual storytelling, which has lots of room for further improvement. - Q2.
Potential copyright issues(SemY&tcbg) - For video data in our StorySalon dataset, we will release it in the form of YouTube URLs; and for ebook data, we obtain ebooks from global digital libraries registered under **CC BY 4.0 license**, e.g. Bloom Library. All books are open-source. The community can process raw data via our processing pipeline. - Q3. FID in Table 1 and Table 2(SemY&tcbg&zaiF) - Apologies for the confusion. Due to limited space, the explanation of experiment settings was included in L031-L071(supplementary); we will clarify this in the revised paper. - Concretely, in Table 1, we prompt ChatGPT to generate 100 storylines and perform visual storytelling conditioned on the storylines. The FID score is calculated between the generated 100 visual stories (~600 frames) and the test set ground truth of StorySalon; - In Table 2, we use text of the StorySalon test set as the condition to generate image sequences. The FID score is calculated between the synthesized test set (~200 stories, ~3K frames) and the ground truth test set. - Q4. Illustration of Figure 2 on #frames(4bmh&SemY&tcbg&zaiF) - Apologies for the confusion. Our proposed architecture is indeed flexible in conditioning on multiple frames, which can be achieved by concatenating their CLIP features as visual contexts, as mentioned in L165-L166. - During rebuttal, we tested our model on more condition frames. As shown by FID results of StoryGen with multiple condition frames in Table 3(rebuttal PDF), more contexts do improve performance but not very significantly; we conjecture this is due to our dataset/task prior, i.e. simple stories for children. We will add the results and discussion to our final paper. - Q5. Effectiveness of human feedback(ALL) - Our Human Feedback process consists of two stages: prompt ChatGPT to generate storylines, and manually select high-quality images from our generated samples by simply giving YES or NO labels. This manual filtering procedure is thus a reflection of human preference.
- We agree with the reviewers that adding small-scale human feedback seems not to be effective, and this is the reason for the limited gain brought by human feedback in Table 2. - In Table 1(supplementary), we expand the amount of data and obtain StoryGen-HF. It further improves performance compared to StoryGen-hf, suggesting that human feedback is effective, though it requires larger-scale annotation to reflect its value. Visual examples in Figure 2,3,4(supplementary) also show that human feedback can help to produce images with improved consistency and quality. - Q6. Baselines(ALL) - Considering the **open-ended** feature of our task, SDM is the only strong baseline. **StoryGAN** and **StoryDALL-E** naturally do not support the open-ended task due to the lack of large-scale pre-training and outdated backbones; it is thus unfair to compare with their released checkpoints. **AR-LDM** has not provided pre-trained checkpoints, and as the model is trained end-to-end, our computational resources and dataset scale are insufficient for training it. - In Table 1(supplementary), we have provided an additional baseline **StoryGen-Single**, denoting SDM finetuned on StorySalon with the Style Transfer Module. StoryGen performs significantly better in FID. Please check the rebuttal PDF for more baseline results. Qualitative results in Figure 2,3,4(supplementary) also demonstrate that StoryGen outperforms StoryGen-Single in terms of inter-frame coherence, showing the effectiveness of the Visual Context Module. - Q7. BERT masking(SemY&zaiF) - We treat this as an empirical discovery rather than a technical contribution. We provide our experimental results from project development in the rebuttal PDF. - Q8. CLIP-based metric(4bmh&zaiF) - Thanks for the advice; we provide evaluation results on CLIP score and PickScore(CLIP-based) in the rebuttal PDF.
But since CLIP is trained on real-world image-text pairs, these metrics explicitly prefer natural images, as shown by low scores of GT(cartoon style). So CLIP-based metrics can only be treated as a reference. Pdf: /pdf/7f22ca192a95d1f65e6451646dda03b918aedca0.pdf
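For context, the FID scores discussed throughout (Q3 above) follow the standard Fréchet Inception Distance: Gaussians are fit to Inception features of the real and generated frame sets and compared as

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)
```

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the feature mean and covariance of the real and generated sets. Since the estimate depends on sample covariance, small test sets (the ~600 vs. ~3K frame settings above) yield noisier scores, which is the crux of the reviewers' precision concern.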
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes an approach for fine-tuning diffusion models for the task of story generation, where a model must generate frames for sentences in a story. To do so, they propose adding adaptors conditioned on both images and text into a pre-trained stable diffusion UNet. The authors also introduce a scraped dataset of 2k stories with 30k image-text pairs, which serves as the data foundation for their fine-tuning. Strengths: S1. The dataset of story text and images is a significant contribution, which can be useful for future work on visual storytelling. S2. The paper is generally well framed and motivated. Writing and presentation are in general strong and polished. S3. Simplicity of the method. The method uses off-the-shelf components and algorithms (LoRA, cross-attention, etc.) to enable new capabilities. I see this simplicity as a strength, not a weakness. S4. Inclusion of human evaluation is a strength. Weaknesses: W1. Presentation. Figure 2 suggests that StoryGen is conditioned on all past frames. However, in reality StoryGen is conditioned on the most recent frame. W2. Results. It appears that the model is not able to preserve style and content as well as, say, DreamBooth. For example, the stripes on the shirt in Figure 4 are not preserved. W3. Evaluation. It seems that the StoryGen model without human feedback was not evaluated in the human evaluation (Table 1). Given that the ablation in Table 2 shows similar FID scores for StoryGen with and without human feedback, it is not clear if this step is really necessary. W4. Lack of baselines. StoryGen is specifically fine-tuned for the desired task, while stable diffusion is not. Given this, it is perhaps not surprising that StoryGen greatly outperforms stable diffusion. Can some other baselines, perhaps based on DreamBooth (with the subject in the first image representing the special [V] token), be used as a stronger baseline?
This is just one idea; however, comparing against some prior work may help to elucidate the strength of the method. W5. Evaluation. Fine-tuning can sometimes hurt the generality of a model. How well are individual frames generated relative to the base SD model? Is it possible to do a human evaluation here or compute CLIP scores? Ideally, the story would be cohesive (frame-to-frame consistency) without degradation in quality of each frame. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. Can the authors provide further discussion of why StoryGen without human feedback was not evaluated in the human evaluation (W3)? Is it possible to add this evaluation? I believe this would help contextualize the importance of the human feedback in the proposed method. Q2. Can the other concerns related to evaluation be addressed (W2, W4, W5)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not directly address limitations in the main paper, which is an additional weakness. I suggest discussing failure cases or otherwise conducting a failure analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the positive feedback on our writing, dataset and simplicity of the proposed method. We hope the following response can fully resolve the raised concern and thus raise our score accordingly. We are always open to further discussion. - W1. Presentation - Please refer to Q4 of the global response. - W2. Comparison with DreamBooth - We agree that DreamBooth may perform better. However, DreamBooth is an optimization-based method that has to finetune the **entire** model for many iterations(\~20 minutes) on each reference image sequence (3\~5 images). While our method is learning-based, once trained, it can be efficiently applied to generate coherent stories without any special fine-tuning. - In fact, the idea of DreamBooth can also be integrated into our framework to enhance the ability to maintain the style and content via test-time adaptation. We treat this as future work. - W3. Evaluation on human feedback - Please refer to Q3 of the global response for details of these two Tables. - Considering that human evaluation for each model variant is very labour-intensive and time-consuming, we can only afford to do it on the experiments for comparison with state-of-the-art methods. While for ablation study on validating the effectiveness of our proposed module, we adopt the widely used FID metric, the experiment results can be seen in Table 1 of the supplementary material. - Please refer to Q5 of the global response on the analysis of further human feedback. - W4. Lack of baselines: - As presented in Q1 and Q6 of the global response, there is no existing model with the capabilities for open-ended visual storytelling, so we choose SDM and Prompt-SDM as two strong baselines. - DreamBooth has two insurmountable limitations for our task: First, it requires thousands of steps(\~20 minutes) of adaptation for each reference image sequence (3\~5 images), which is very time-consuming and difficult to extend to large-scale test data. 
Second, the input of visual storytelling only contains a storyline from users or ChatGPT without reference images, while DreamBooth requires 3\~5 reference images for optimization. Thus it can not be finetuned to be a baseline. - W5. Evaluation on individual frames: - Thanks for the advice. In fact, we have included the experiments for single-frame finetuned model, i.e., StoryGen without Visual Context Module in Table 1(supplementary), termed as StoryGen-Single. The FID score of StoryGen-Single is significantly better than that of SDM and Prompt-SDM. Figure 2,3,4(supplementary) also demonstrate that StoryGen-Single can generate story-style images closer to the given storyline than SDM and Prompt-SDM. - Please refer to Q8 of the global response for CLIP-based metric evaluation results. - Q1. Effectiveness of human feedback: - Please refer to W3 of this response and Q5 of the global response. Besides, as presented in L189-L190, another potential advantage of Human Feedback is to avoid potentially scary, toxic or biased content, which can not be measured by quantitative metrics. - Q2. Evaluation of performance: - As discussed above, we have provided quantitative and qualitative experimental results in the supplementary that further illustrate the effectiveness of our proposed method, demonstrating that our single-frame pretraining, multi-frame finetuning and human feedback gradually improve the model's performance. For more evaluation results, please check the rebuttal PDF. - L1. Limitations and failure cases: - Due to the limited space of the manuscript, we have included the limitations in supplementary (L114-L124) and provided the visualization of failure cases in Figure 4(rebuttal PDF). The first two cases show that StoryGen will produce low-quality characters when the character number in the image is large and the spatial relationship is complex. 
The 3rd case shows that the CLIP text encoder is not good at counting (wrong number of objects; CLIP mistakes 3 hedgehogs for 2), and the last case shows the bias of our dataset towards page-like images with a crease in the middle. --- Rebuttal Comment 1.1: Comment: Thanks for the responses to my comments! However, I do still feel that an additional baseline that has some sort of task-specific training is necessary to contextualize performance. DreamBooth was one such idea, but other ideas could include LoRA fine-tuning a StableDiffusion model on the proposed StorySalon dataset. I am electing to keep my initial score of 5. --- Reply to Comment 1.1.1: Comment: - Thanks for your comments. But the results of the additional baselines that you requested have been provided in our supplementary and rebuttal. Please check **Q6** of the **Author Rebuttal** again. - The results of StableDiffusion with LoRA fine-tuning on StorySalon have been provided in **Table 1** of the **supplementary**, termed **StoryGen-Single**. Besides, we have also included the StableDiffusion model with cross-attn layers finetuned on our proposed StorySalon dataset as another strong baseline, namely **Fine-tuned SDM**, in **Table 2** of the **rebuttal PDF appendix**. We compared our model with these stronger baselines on FID and CLIP-based scores, and the quantitative results have demonstrated that our full StoryGen can also significantly outperform these stronger baselines.
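The LoRA fine-tuning discussed in this thread wraps a frozen pretrained weight with a low-rank trainable update. A generic, self-contained sketch of one such adapted linear layer (shapes, names, and initialization are illustrative assumptions, not StoryGen's actual code):

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA adapter around a frozen linear layer.

    y = x W^T + (alpha / r) * x A^T B^T, with only A and B trained.
    B is zero-initialized, so the adapted layer initially matches the
    pretrained one -- the standard LoRA design.
    """
    def __init__(self, weight, rank=4, alpha=4.0, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = weight.shape
        self.W = weight                                    # frozen pretrained weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, rank))                   # trainable up-projection, init 0
        self.scale = alpha / rank

    def __call__(self, x):
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(np.eye(8))
x = np.ones((2, 8))
assert np.allclose(layer(x), x)  # at init, LoRA leaves the base layer unchanged
```

Applying such adapters only to cross-attn (or also self-attn, as the reviewer suggests) is then a choice of which layers get wrapped.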
AmadeusGPT: a natural language interface for interactive animal behavioral analysis
Accept (poster)
Summary: This paper presented Amadeus, a natural language interface for interactive animal behavior analysis. To accommodate a modern LLM (GPT3.5) for behavior analysis, the authors proposed to use an API document to constrain GPT3.5's knowledge space. Furthermore, the authors proposed a Dual Memory Mechanism to read and write behaviors (symbols), enabling correct long-term memory retrieval and overcoming the limit of 4096 tokens. The authors demonstrated results on three standard mouse behavior datasets. Strengths: (1) The motivation is quite novel. The whole system could provide an unprecedented experience of animal behavior analysis through natural language guidance only. This paper pioneers a practical way to integrate LLMs into task programming. (2) The paper is well written. (3) The quantitative result on the MABE dataset demonstrates the effectiveness of the system. Weaknesses: (1) Although the paper has claimed "animal behavioral analysis", the datasets used are mouse only. I wonder whether it is difficult to transfer the experience of this paper to other animals, e.g. monkeys, zebrafish, etc. The authors should discuss its applicability to other animals. (2) I want to know how many behaviors have been tested on this model. It would be better to list them in the supplementary files. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: (1) Can it extrapolate to behavioral codes that are never seen, and how are the atomic APIs distributed? I mean, the behaviors that could be handled seem to be closely related to the capability of the atomic API. If the atomic API could not measure animal overlap, would the system work with a behavior like "mount", which is closely related to pixel/keypoint/area overlap? (2) What if a symbol is defined twice with slightly different or totally different descriptions? (3) Mistake: In Figure.3, "Cmpute the pose from the video" -> "Compute …" Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Firstly we greatly thank you for your rating and recognition of the novel approach and performance of AmadeusGPT. Below we address your questions as best we can, and also agree it will be very exciting to see this deployed on more behaviors. To address your noted **weaknesses**: - **(1)** There is nothing limiting the system to mice (i.e. the core modules are SAM (generalist), animal pose with SuperAnimal (Ye et al. 2023 released weights that work on over 45 species, therefore we used them), and CEBRA, which is also a generalist new dimensionality reduction algorithm just published in Nature (Schneider et al. 2023, code publicly released March 2023)). We do agree perhaps the choice of only mice tasks was limiting, but these are the most common benchmarks. However, we have now tested another species for you. In particular, we provided a new demo with a horse video. We added local maxima detection and some atomic gait analysis and visualization functionalities, and demonstrated that with an addition of <100 lines of code AmadeusGPT can be asked, for example, to evaluate the symmetry of a horse's stride (see PDF in overall rebuttal for the visual demonstration). - **(2)** The Sturman et al. benchmark data (EPM), the 9* MABe challenge tasks we show, and the MausHaus video that allows us to demo the SAM + animal pose estimation, and integrations with CEBRA. Of course in the future we aim to open this code up to users and have a web portal – there is nothing inherent in SAM, pose models, or CEBRA that limits this to mice, and we focused on spatio-temporal reasoning as from flies to macaques, interactions with conspecifics and objects are present (and see above). *-*We added 3 more tasks in response to another reviewer.* To respond to **questions**: - For **question (1)**, we agree the way we handle behaviors seems to be closely related to the capability of the atomic API, though it should only augment the underlying LLMs.
In the early development stage, we found that many behavior tasks cannot be solved by naive GPT-3.5, possibly due to the lack of high quality training data for behavior analysis. Therefore, we have tried to, and will continue to, cover more atomic APIs by having behaviorists try our software. - Also, as we briefly mentioned in the text, this is a practical design choice for extensively using those atomic APIs or combinations of them. This is due to the limited code generation ability of GPT-3.5 and its context window. By adding constraints using atomic APIs, we are able to (1) reduce code hallucinations, (2) reduce token consumption of the LLMs, (3) encapsulate lots of pre/post processing that is needed for handling noisy real-world data. As more powerful LLMs such as GPT-4 (context window much bigger) become more available and cheaper, we will gradually relax the constraints from atomic APIs in the future. We also note behavior analysis is subject to domain knowledge, and the available LLMs might not be sufficiently trained to know these domains. Therefore, some constraints of atomic APIs will still remain even in the future. To your more specific question about "mounting" and "overlapping", we do not see any problem for AmadeusGPT to handle overlapping and mounting, as they can be easily captured by the distance between animals. Though 3D keypoints and other kinematics would be needed to distinguish the two behaviors well. However, imagine it could not handle overlap, then it probably wouldn't work for mounting, as mounting and overlapping will involve the generalization or specialization of the same set of atomic APIs. - For **question (2)**, we have a mixture strategy for long term memory; you are welcome to check section 3.3 and section 3.5 for more details. In short, for dynamic module loading, we rely on the embedding of the query for storing and retrieving. In that case, a slightly different description might cause false retrieval.
But since you are asking about our symbolic pointer approach for long term memory, defining a symbol twice is equivalent to overwriting an entry in a Python dictionary. And whether the descriptions are similar or very different should not impact retrieval, as we use regular expressions to look for the symbol in <> and <||>. This way, the description is guaranteed to be retrieved correctly as long as the symbolic name is given correctly. - For **question (3)**, we will fix the typo in the figure, apologies! Mistake: In Figure.3, "Cmpute the pose from the video" -> "Compute …" --- Rebuttal Comment 1.1: Title: reply Comment: I have read the rebuttal. I would like to maintain my initial rating. Good job. --- Reply to Comment 1.1.1: Comment: Thanks very much!
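The symbolic-pointer behavior described in the rebuttal (dictionary-backed storage, redefinition as overwrite, regex retrieval of `<name>` / `<|name|>` references) can be sketched as follows; class and method names are hypothetical, and the regex is one plausible reading of the `<>` / `<||>` convention, not AmadeusGPT's actual code:

```python
import re

class SymbolicMemory:
    """Minimal sketch of dict-backed long-term memory with regex symbol retrieval."""

    def __init__(self):
        self._store = {}  # hash table: O(1) insert and lookup

    def define(self, name: str, description: str) -> None:
        # Defining a symbol twice simply overwrites the dictionary entry.
        self._store[name] = description

    def resolve(self, query: str) -> dict:
        # Retrieve every symbol referenced as <name> or <|name|> in the query.
        names = re.findall(r"<\|?(\w+)\|?>", query)
        return {n: self._store[n] for n in names if n in self._store}

mem = SymbolicMemory()
mem.define("rearing", "the animal stands on its hind legs")
mem.define("rearing", "hind legs on ground, body upright")  # redefinition overwrites
print(mem.resolve("How often does <rearing> occur?"))
```

This also illustrates the rebuttal's complexity argument elsewhere in the thread: dictionary lookup is O(1) per symbol, versus embedding-based retrieval over a vector database.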
Summary: This paper proposes AMADEUS, a GPT3.5-powered system to perform animal behavior data analytics given a natural language user-given query and a video depicting animal behavior. The model works by using GPT-3.5 to generate python code which makes calls to an instance segmentation and animal pose model, as well as hard-coded modules to compute animal-object relationships, and operates on their outputs to return results according to a given query. The proposed model is tested with 3 popular animal behavior analysis benchmarks, showcasing different domains, queries and tasks, showing behavior consistent with human annotators and surpassing existing baselines. Strengths: - Related work: The paper is open about closely related work, and assigns credit for the contributions that are used in the AMADEUS system. - Method: - I appreciate the discussion about the design of the API. It would be valuable to understand what would be the effect of not having made the specific choices mentioned in lines 150-159. - I appreciate the Dual Memory Mechanism proposed (including the illustrative example of limitations of short context windows). I would like to understand the main differences between that approach and the one used in Generative Agents, and a discussion on why their execution flow is more expensive. The memory mechanism also has valuable usability features, such as being able to store and retrieve states. - Relying on an LLM allows providing complex data analysis queries that were not possible in previous methods, as well as follow-up questions to a given data analysis. - The proposed method, besides flexible, is high performing on existing benchmarks, consisting of varied types of queries, expected outputs and data domains. The method performs above the state of the art, both qualitatively and quantitatively, surpassing all submissions in the MABE 2022 behavior challenge.
Weaknesses: Novelty: - The work is very similar to existing works using LLMs for computer vision tasks (ViperGPT, VisProg), with the main difference being that they are applied to the animal behavior domain. This is however reported in the related work section, and the work includes other components to handle long-duration data. Scope: - This looks more like a systems paper, with many of the contributions already existing in previous works, and the main contribution being in combining these existing contributions for a new domain. I think that the paper does have value, but I am wondering if it is the right scope for NeurIPS. Method: - The performance of the method seems highly dependent on the data analysis modules, which are handwritten. How general are these modules to different kinds of animal behaviors? - I am suspicious of the robustness and stress testing section. The full system relies on GPT-3.5, and that same model is used to stress-test the system. I would like to see how good the performance is (with/without rephraser) on independent researcher-given queries. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are mostly addressed. The biggest missing limitation is how general this approach is to other animal behavior analysis, given the strong dependence on the hand-designed modules. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for pointing out the high performance, flexibility, and new additions that our work brings that related work has not yet addressed (dual memory), and we also thank you for praising our related-work section; it's a quickly moving and exciting time for LLMs + computer vision! We have worked to address your noted weaknesses and answer your questions below. Also, regarding the relation to Generative Agents: Park et al. 2023 came out on arXiv only 30 days before our work, so while it has some similarities we did not compare it head-to-head. The largest difference we see is that it builds synthetic agents with code, while we work to analyze real-world video data directly (this is not a critique of that very cool work). Regarding the expense of the execution flow: since our symbolic pointer for long-term memory is implemented using a Python dictionary (i.e., a hash table), the computational complexity of lookup and insertion is O(1). In contrast, generative agents require an LLM to calculate the embedding for every query, and the embedding is later used to retrieve relevant content from a vector database, where the lookup operation likely takes O(Nd), where N is the number of vectors in the database and d is the dimension of the vectors. To address your noted **weaknesses**: - (1, 2) Novelty & Scope: We agree there are great synergies with ViperGPT and VisProg, but there are some differences, and we do cite the work. The technical difference that you also highlight as a strength is our novel dual-memory system, which we do think is broader than animal behavior. Yet, given NeurIPS is at the intersection of ML and neuroscience, we felt this was also the proper venue for our work; we hope you agree!
- (3) Method: we don't fully constrain the LLMs based only on our API, and we wrote the API to be quite “general” for laboratory animal behavior – of course, that is still a specialist approach – but we do think it shows a first applied real-world use of LLMs and SOTA modules in computer vision and machine learning that are linked by our system. Directly, it can be used on other animals, and to address another reviewer we show this on another species; note that nothing is hard-coded about mice, as the hard-coded nature concerns spatio-temporal reasoning, which spans animals. We hope that clarifies the aim and approach. - (4) We developed a web app and invited external users to test the system with GPT-3.5 models, thereby OOD- and stress-testing it. We analyzed the logs from 362 queries, 242 of which were from unique prompts (i.e., manually typed ones rather than from executing the demos). Out of the 362 queries, 329 were automatically rephrased. We found AmadeusGPT had reported 129 “errors”: 38 were caused by unsupported features or undefined variables, and were thus explained to the user, while 64 originated from programming errors. Therefore, across the 362 queries there was an 18% programming error rate and an 11% unsupported-feature-request rate. Then, we took 10 of the failed runs from external users and re-ran them with GPT-4. Now, of the 10 that previously failed, only 3 failed, and 4 correctly output clarifying queries to the user*: 1: ['❌ Get angles and distances between all body parts. Plot a UMAP graph using the resulting data.', 2: '✅ Perform hierarchical clustering and plot the dots with different colors based on their clusters.', 3: '✅ Plot the distance between animals over time.', 4: '✅ What is the speed of the changes from the left to the right arm?', 5: '✅ Define <|relative_head_angle|> as the angle between the mouse_center and the head_midpoint.
Plot the variation of <|relative_head_angle|> over time.', 6: '❌ Plot animal center of mass x-coordinate, velocity, acceleration, and head direction.', 7: '✅ Plot a bar graph with the first bar representing the total time the animal spends in ROI0 (open arm) and the second bar representing the total time the animal spends outside of ROI0 (closed arm).', 8: '✅ Plot the euclidean distance between the nose points of animal0 and animal2 over time.', 9: '❌ Define <|head_direction|> as the orthogonal angle to the line between left_ear and right_ear.', 10: '✅ Plot each grooming bout.'] *For (2), (4), (5), (10), the output is a message indicating either that the feature is not implemented or that the query does not provide all the information, and explains what would be needed, either in terms of development or clarifications from the user. Thank you for this suggestion to stress-test further, and we hope this overall response enables you to support our work! --- Rebuttal Comment 1.1: Comment: Dear Reviewer f7B3, Are there any clarifications we can provide for you regarding our rebuttal? Thank you for your time and efforts in this busy period. --- Reply to Comment 1.1.1: Comment: Apologies in advance to bother you, but given today is the last period of rebuttal clarifications, we want to be sure you saw our rebuttal. Thank you!
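The O(1) hash-table lookup vs. O(Nd) vector-database scan contrast described in the rebuttal above can be sketched in a few lines of Python. This is an illustrative sketch only; none of these names come from AmadeusGPT's actual implementation.

```python
import math

# Symbolic-pointer long-term memory: a plain dict (hash table), so
# insertion and lookup are O(1); redefining a symbol simply overwrites it.
memory = {}
memory["relative_head_angle"] = "angle between mouse_center and head_midpoint"
memory["relative_head_angle"] = "angle between mouse_center and head_midpoint, in degrees"
definition = memory["relative_head_angle"]  # O(1) retrieval

# Embedding-based retrieval (as in generative agents): each query is
# embedded and compared against all N stored d-dimensional vectors,
# i.e. O(N*d) work per lookup.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def embedding_lookup(query_vec, store):
    # store: list of (embedding, text) pairs; linear scan over N entries
    return max(store, key=lambda item: cosine(query_vec, item[0]))[1]

store = [([1.0, 0.0], "definition A"), ([0.0, 1.0], "definition B")]
print(embedding_lookup([0.9, 0.1], store))  # -> definition A
```

The dict lookup never touches the other entries, while the embedding lookup must score every stored vector, which is where the O(Nd) cost (plus the per-query embedding call) comes from.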
Summary: This paper proposes Amadeus, a novel natural-language-based interface that leverages large language models like ChatGPT and vision-language models like SAM for animal behavior tracking and analysis. Amadeus leverages LLMs to generate code outputs, which can be executed to retrieve visual and memory information and return requested responses to user queries. In particular, Amadeus proposes a novel dual-memory mechanism that combines short-term and long-term memory banks to effectively analyze extended contexts such as long videos. Experiments demonstrate that Amadeus achieves state-of-the-art performance on the MABE Behavior Challenge 2022 benchmark. Strengths: - The paper is generally well-written, and the figures are generally informative about the details of each module of Amadeus. - Leveraging recent advancements in LLMs and VLMs to analyze animal behaviors is a novel idea and holds significant potential for scaling up to more complex animal behaviors and a larger number of animals in the future. One of the strengths of this approach is its accessibility and training-free nature, which makes it adaptable to novel scenarios with relative ease. Weaknesses: In Fig. 4, the implementation of the long-term memory storing and retrieval processes is a bit unclear, as the generated code does not explicitly store or retrieve information into / from the long-term memory bank. While the proposed approach achieves state-of-the-art results on the MABE Behavior Challenge 2022 benchmark, it would be beneficial to include discussions of the failure cases, and analyze their sources of failure (e.g., due to mistaken VLM perceptions, or due to mistaken code generations, or due to compounding errors caused by wrong values pushed to the memory banks, etc.) Technical Quality: 3 good Clarity: 3 good Questions for Authors: See "weaknesses" Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are mostly sufficiently addressed. Another limitation to add is that the current approach is bottlenecked by the capabilities of LLMs and VLMs (e.g., SAMs) being used. Perception errors can occur due to the limited capabilities of current VLMs, and the ability to produce correct responses given more complex queries and more demanding tasks is bounded by the capabilities of current LLMs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for noting our novel idea and the future potential to scale more broadly; we agree and are excited about this prospect. We also appreciate your noted weaknesses and do our best to address them. To address your noted **weaknesses**: - We will clarify Figure 4 by noting that the generated code shows examples without the memory (i.e., it makes mistakes) vs. code generated with the memory, and additionally add a diagram that makes the flow clear (add <|keyword|>, this goes to memory so tokens can be used otherwise, and recall with the keyword). Conceptually, the storing and retrieving processes are implemented as a default system behavior that is executed silently at every iteration of the question-answering, and we describe the implementation details in Sections 3.3 and 3.5. To clarify further, the reason we implemented storing and retrieving as a system behavior instead of API calls in the generated code is that we did not find it necessary to include them in the APIs, because they do not need to adapt to user inputs and we tried our best to save token consumption. - We will also show mistakes in the Appendix, as of course the system does not reach 100% performance on the task, and failures can come in two flavors: (1) Amadeus outputs unusable code, or (2) the code cannot capture all the behaviors in the video. Based on external testing, the rate of code failure was 18%, with an 11% unsupported-feature-request rate with GPT-3.5, and only 10% errors with GPT-4 (note we used GPT-3.5 for the MABe results). For the MABe results, we ruled out failure cases caused by code generation errors because we have a self-correction mechanism that can fix obvious code generation errors for the social behavior APIs used in MABe, and we cache the correct code when iterating through the 3000+ videos (this is also, of course, to save the budget of running tens of thousands of ChatGPT API calls).
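A minimal sketch of the self-correction-and-caching idea described above. Here `ask_llm` is a stub standing in for a real chat-completion call, and all names and behaviors are illustrative assumptions, not the actual AmadeusGPT implementation.

```python
# Sketch of a self-correction loop with caching for LLM-generated code.
code_cache = {}  # query -> last known-good code, to avoid repeat API calls

def ask_llm(query, error=None):
    # Illustrative stub: a real system would send `query` (plus the
    # traceback, if any) to GPT-3.5/4 and return generated Python code.
    if error is None:
        return "result = 1 / n_frames"        # buggy when n_frames == 0
    return "result = 1 / n_frames if n_frames else 0.0"

def run_with_self_correction(query, context, max_retries=3):
    if query in code_cache:                   # reuse cached known-good code
        exec(code_cache[query], context)
        return context["result"]
    error = None
    for _ in range(max_retries):
        code = ask_llm(query, error)
        try:
            exec(code, context)               # run the generated code
        except Exception as exc:              # feed the error back to the LLM
            error = repr(exc)
            continue
        code_cache[query] = code              # cache on first success
        return context["result"]
    raise RuntimeError(f"could not repair generated code: {error}")

print(run_with_self_correction("fraction per frame", {"n_frames": 0}))  # -> 0.0
```

The first attempt raises `ZeroDivisionError`, the error text is passed back to the (stubbed) LLM, the repaired code succeeds, and the working code is cached so later runs over many videos skip the API call entirely.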
We also agree in general with your noted limitations of LLMs, but hope these additions and clarifications will help you increase your rating of our work! --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks authors for the rebuttal! My concerns have been sufficiently addressed and I keep my original rating. --- Reply to Comment 1.1.1: Comment: Thanks very much!
Summary: This paper presents a novel interface and deep-learning system that enables interactive animal behavioural analysis using only natural language, on tasks that previously required extensive coding expertise and technical machine learning knowledge. The proposed framework integrates various modules based on LLMs and existing vision networks dedicated to various parts of the task, such as understanding the data processing API and rephrasing user prompts. This work also contributes a novel long-short-term memory technique in the context of LLMs. In combination, the contributed interface and system achieve state-of-the-art performance on established tasks in the domain of animal behavioural analysis, and have the potential to support domain experts in using neural techniques and software frameworks that are currently difficult for them to use. Strengths: - Enables the novel user experience of analyzing animal behaviour using only natural language, which could be an important affordance for domain experts - Well-designed, workable end-to-end system that reasonably combines multiple deep-learning modules, all with dedicated purposes in various aspects of the task. - Effective use of SoTA techniques in deep learning to achieve high performance on established domain-specific tasks - Introduced a novel long-short-term memory module that is compatible with LLMs and showed effective usage of such a module in the contributed framework Weaknesses: - **Unclear general applicability of the long-short-term memory module and the framework in general:** From a machine learning perspective, the most significant technical contribution would be the long-short-term memory module in the context of LLMs that the author(s) developed. The current paper discusses this module primarily in the context of animal behavioural analysis.
I would like to see some discussion of the general applicability of the memory module to common domains utilising LLMs, and also the general applicability of the entire framework to other domains, such that researchers working on those domains can benefit from the technical contributions of this paper. - **Unclear generalizability of the work due to author-crafted prompts:** This is a limitation the author(s) have acknowledged, in that the experiments performed by the author(s) all followed prompts crafted by the author(s). I also acknowledge that the author(s) have attempted to alleviate this issue by introducing OOD base questions and using similar task descriptions as the original definition in the MABE challenge. However, I am curious about the opinion of the end-users (i.e., animal behaviour analysts/specialists) on the ability of the system to handle their prompts. It would be great if the author(s) could include some study (could be formative/informal) and/or discussion regarding the ecological validity of the system. It would be great if the author(s) could report lower-level success metrics, such as the rate of syntax/compilation errors in the analysis code generated by Amadeus. - **Limited technical novelty and contribution for Machine Learning**: Related to the first point, the primary technical contribution of the paper in terms of modelling techniques is the long-short-term memory module, which is limited. I believe more extensive discussion and/or experiments on this module could be helpful for researchers in the ML community to build upon the findings of this paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) What is the general applicability of the long-short-term memory module and the entire framework to domains other than animal behaviour analysis? 2) Did any animal behaviour experts try using the system / review the prompts in the experiments of this paper?
If so, what are their opinions on the ecological validity of them? 3) What are the more low-level success metrics of the code generated by the system, such as the compilation error rate? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The limitations of this work were adequately discussed by the author(s). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Firstly, we would like to thank the reviewer for pointing out the novelty of our overall system and the novel dual-memory system. We'd like to add that we aimed to be generous in related works; we are one of the first systems to show LLM-generated executable code and error correction, and we had not seen our Rephraser model elsewhere (of course, since early May more works have come out). Thus, we want to push back a little on the idea that this paper is just a summation of many parts. We hope the responses below (and our global response and responses to other reviewers) will convince you this is a paper not only for the machine learning crowd, but also for neuroscientists. To address **weaknesses**: - **(1)** This is a fair point, in that we only discuss or test our system in the context of animal behavior. We can easily speculate that it is much broader, as there is nothing inherent about it being related to behavior, but we wanted to be careful not to overclaim this in the manuscript. To speculate, many applications demand that a system retain distant context for tasks like answering questions or ongoing learning. Thus, we believe the general capability of this framework is also a good contribution for people working in other domains. Our modular design and philosophy of atomic APIs vs. integration APIs can provide interesting references for people who want to build a flexible, extendable analysis framework using LLMs, machine learning models, and open-source libraries. - **(2)** To probe the system, we have now released a fully usable front-end user interface to Amadeus with the 3 demo datasets used in the submitted manuscript (EPM, MABe, and MausHaus videos) and asked neuroscience behaviorists to test it – namely, we added example prompts they could run and a chat interface to ask their own questions. Over 100 signed up in 48 hours, and we admitted 30 as alpha testers who stated they worked with laboratory animal behaviors.
We analyzed the logs from 362 queries, 242 of which were from unique prompts (i.e., user-typed rather than from executing the demos). Out of the 362 queries, 329 were automatically rephrased. We found Amadeus had reported 129 “errors”: 38 were caused by unsupported features or undefined variables, and were thus explained to the user, while 64 originated from programming errors. Therefore, across the 362 queries there was an 18% programming error rate and an 11% unsupported-feature-request rate. The users also had the option to give a “like” or “dislike” on whether the output matched their expectations, and out of 51 responses there were 6 dislikes for fully executed outputs, 18 dislikes for code errors (which of course is expected), and 33 likes. Thus, while of course there is room for improvement, we hope this generally addresses your excellent point on generalizability to real-world users. - **(3)** We do appreciate that our main technical contributions relate to the novel dual-memory system, but we also think the whole integrated system of multiple LLM models working together to write, refine, and explain results after the Python interpreter is sufficiently novel, especially at the intersection of neuroscience and machine learning. We certainly will add to the discussion/paper about other domains of interest for this system-level approach too. We will also add more about our memory module: Implementing short-term memory via a deque for LLMs is straightforward. However, our focus is on long-term memory. Two main approaches exist: 1. Altering the transformer architecture; 2. Using external storage like the vector databases seen in generative agents and recent works. Approach 1, altering the architecture, is the natural way to extend LLMs' memory span. It requires further investigation to determine if the extended span affects language task performance, a topic we plan to explore. Note that this might incur higher costs compared to GPT-3.5 API calls.
For Approach 2, seen in projects like Auto-GPT, a vector database serves as external storage for context embeddings. This method can be expensive due to embedding computations and a vector-database lookup complexity of O(Nd), N being the number of vectors and d their dimension. Our long-term memory uses a mixed strategy: For long text such as source code, our dynamic module loading is similar to those works that store and look up embeddings using vector databases. The dynamic module loading fetches the most relevant integration module based on the user's query, providing a flexible way to collect useful integration modules from the community (Approach 2). However, we also noted that for interactive behavior analysis, if the user just wants to retrieve some content, such as the definition of a behavior, our symbolic-pointer-based long-term memory is also an effective solution. Firstly, using symbolic pointers, there is no need for LLMs to calculate the query embedding every time, thus lowering financial and computational burdens. Additionally, as the definitions of many behaviors are highly similar in text, fetching context based on embeddings of descriptions can be error-prone. Our symbolic pointer approach uses regular expressions to look for what is in the special tags such as <> and <||>. It guarantees that the fetched description is correct as long as users keep track of the symbol names used, just as programmers need to remember variable names. Secondly, since our symbolic pointer approach implements long-term memory using a hash table, it provides cheap insertion and lookup operations with complexity O(1), a sharp contrast to O(Nd) with a vector database. In summary, we are one of the first works to propose long-short-term memory for this kind of system, and our mixed strategy for long-term memory is a good practical solution. We hope the above responses answered your **direct questions** (see points **3** and **2**, respectively).
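The dual-memory scheme described above (a bounded deque for short-term memory plus a regex-parsed symbolic-pointer dictionary for long-term memory) can be sketched as follows. The `<|...|>` tag syntax follows the examples in the rebuttals; the function and variable names and the "Define ... as ..." parsing are illustrative assumptions, not the actual implementation.

```python
import re
from collections import deque

# Short-term memory: a bounded buffer of recent chat turns.
short_term = deque(maxlen=5)

# Long-term memory: symbol -> description in a hash table, so insertion
# and lookup are O(1); redefining a symbol overwrites the entry.
long_term = {}

SYMBOL = re.compile(r"<\|(\w+)\|>")
DEFINE = re.compile(r"Define <\|(\w+)\|> as (.+)")

def process_turn(text):
    """Store a turn in short-term memory and handle symbolic pointers."""
    short_term.append(text)
    m = DEFINE.match(text)
    if m:  # "Define <|name|> as ..." writes to long-term memory
        long_term[m.group(1)] = m.group(2)
        return []
    # Any later mention of <|name|> fetches the stored definition
    return [long_term[n] for n in SYMBOL.findall(text) if n in long_term]

process_turn("Define <|head_angle|> as the angle between mouse_center and head_midpoint")
print(process_turn("Plot <|head_angle|> over time"))
# -> ['the angle between mouse_center and head_midpoint']
```

Retrieval here never embeds anything: a regex extracts the symbol and a dict lookup returns its exact stored definition, which is why similar-sounding behavior definitions cannot be confused with one another.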
We thank you for your expertise, and hope our new analysis and clarifications enable you to find our work now acceptable, thank you! --- Rebuttal Comment 1.1: Comment: Dear Reviewer ZigY, Are there any clarifications we can provide for you regarding our rebuttal? Thank you for your time and efforts in this busy period. --- Rebuttal Comment 1.2: Comment: Thank you to the author(s) for the detailed rebuttal and the extensive work in testing involving real expert users, which addressed some of my concerns. As a result, I raised my score from 4 to 5. I suggest the author(s) include these changes in their revised paper, if accepted. --- Reply to Comment 1.2.1: Comment: Thanks very much!
Rebuttal 1: Rebuttal: Firstly, we’d like to thank the reviewers, all of whom noted novel (or appreciable) advances with our use of LLMs for behavior and our dual-memory system for LLMs: - “Overall I found the contribution novel, of potential immediate utility to academic neuroscience labs performing behavioral analysis, and as an example for other groups hoping to automate analysis in other systems” - Reviewer XjKy - “novel long-short-term memory module,” … “Enables the novel user experience of analyzing animal behavior using only natural language” - Reviewer ZigY - “Amadeus proposes a novel dual-memory mechanism that combines short-term and long-term memory banks to effectively analyze extended contexts such as long videos” - Reviewer eSvp - “The paper is open about closely related work, and assigns credit” … “I appreciate the Dual Memory Mechanisms proposed (including the illustrative example of limitations of short context windows)” … “The method performs above the state of the art, both qualitatively and quantitatively” - Reviewer f7B3 - “The motivation is quite novel. The whole system could provide an unprecedented experience of animal behavior analysis through natural language guidance only. This paper pioneers a practical way to integrate LLM to task programming.” - Reviewer b8BG Secondly, we thank the reviewers for providing constructive feedback, which we address individually below. In summary, the major points are: - we provided a new analysis of the system from external users (here anonymous, but from behavioral experts) - we added more behavioral examples beyond mice, i.e., we added horses, which is also attached as a new figure PDF - we added three additional MABe challenge tasks Overall, we are thankful for the reviewers’ support, and we hope these additions will improve your impression of our new system. Pdf: /pdf/5c4c18e3069184d139091c6f90e28f9cc878a966.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This submission introduces a system, Amadeus, for performing behavioral analysis on animal videos. This system combines three elements: an API with descriptive docstrings for performing common behavioral analysis tasks, a modified version of GPT-3.5 with an enhanced context window, and a prompt tuning of this GPT-3.5 to allow for construction of task programs for behavioral analysis using a natural language interface. What is claimed is that (1) the language model can extend beyond its nominal 4096-token window, (2) the system allows for behavioral analysis with no additional code written, and (3) the system is easily extensible. The authors provide schematics of the system architecture and the integration between various components. There are numerous examples of how written code is produced from natural language. There are a few evaluations provided: (1) comparing human raters and the Amadeus system in computing time spent in arms of an elevated plus maze, (2) comparing the performance of the system on a select number of tasks from the MABE challenge, and (3) comparing the ability of a trained rephraser model to domain-adapt queries to the developers' syntactical patterns. There is a fairly comprehensive inclusion of the API in the appendix, as well as examples of the rephraser module. Overall I found the contribution novel, of potential immediate utility to academic neuroscience labs performing behavioral analysis, and as an example for other groups hoping to automate analysis in other systems. I however found the evaluations and description of the methods somewhat lacking, and the generalizability and extensibility of the system somewhat unclear. These would need to be improved for me to strongly endorse the paper. Strengths: • This is the first LLM integration I have seen in neuroscience, and I think the approach is potentially interesting.
In some ways I think it could be even more interesting for codebases that analyze data from standardized experimental equipment (e.g. fluorescence imaging, Neuropixels), because the set of tasks to perform is arguably more standardized than behavioral analysis, where the approaches, species, and questions can be fairly diverse. • Many researchers in the life sciences do not have a formal background in writing code or have to interact with a fairly specialized API to analyze data, and so assistance is useful here. • The system schematics and examples are clear. • The API documentation is helpful, and the integration of state-of-the-art systems like SAM into an open-source (if this is to be distributed) codebase is helpful. Weaknesses: • The manuscript is missing descriptions of the system architecture and training details. It is difficult to fully understand how the system was trained without this. • The evaluations are somewhat cursory. The results on the EPM are hard to evaluate because the ‘ground truth’ from the human raters is quite variable. The MABE evaluation is only presented across a subset of tasks. • The generalizability and extensibility of the system to new users and new behavioral tasks is unclear. Part of this is that the rephraser example is somewhat limited, but it is also unclear how successful new users will be in writing functions with appropriate documentation, and in ensuring that documentation hints are not interdependent. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: • Table 1 only lists a subset of MABE tasks; the model results should be reported across the full benchmark. • The lack of methods makes it difficult to know the details of the memory augmentation approach and the extent of training of the rephraser module (and how much one expects the latter to generalize). • I found the memory augmentation result under-contextualized, and other work in this space should be discussed.
Augmenting the LLM memory window is a subject under intense technological development, and the contribution should be put into proper context, especially because it is a secondary result. • The EPM results are hard to evaluate because of inter-human variability. An experiment with clearer ground truth would help. • In general, hallucinations are a real problem for LLMs, and I am not convinced by the given evaluations that they would not occur (e.g. for what fraction of a given call does the system produce the correct result). This is especially problematic for a system designed to be used by people who do not code and may not be able to debug problems. • The API is fairly expansive and includes examples of most function calls and keyword arguments present in the given examples in the paper. This could mean that the system is limited in functionality to simply modifying parameters and numerical values given as input to the functions. Some statement about the expected generalization ability of the approach and the requirements for adding new functions to the docstring (e.g. does one have to provide examples of every new keyword input) is needed. • The claim that using the system would obviate the need to be able to code is strong. As the authors note, hallucinations are a problem for LLMs. Many in the field regard LLM solutions as an accelerated rough draft to be ‘fact-checked’ by a domain expert. Similar to the rephrasing analysis, it would be nice to have a statement about the reliability of the results, ideally across a pool of fairly naïve users. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for noting the novelty and potential strength of our application to the life sciences, and thanks for the suggestion to consider more experimentally constrained (i.e., imaging) settings, which we can try in the future. Here, we focused on classical behavioral tests, mostly for this reason, as many labs deploy these rather standardized setups worldwide (i.e., the EPM). To address **weaknesses**: - **(1)** We do not fine-tune GPT-3.5 (or GPT-4) or the segmentation (SAM) or pose estimation (SuperAnimal) models. Our system consists of three GPT-3.5 instances with different system prompts, the pre-trained computer vision models, and our API implementation; we will clarify this in revision. Regarding the system architecture, we hope Figure 2 gives a comprehensive overview of the design of the system, and more details about the API design and system prompt design are provided in the Appendix. - **(2)** The EPM, and mouse behavior in general, is somewhat subjective, hence the absolute ground truth is a mixture of experts. While we fully appreciate this is not always satisfying, this is a real-world application and a challenge Amadeus is likely to face. MABe contains experimentally defined tasks and expert-annotated behavior tasks, of which 9 of 13 could likely be solved by pose estimation and spatiotemporal reasoning; the other 4 are not feasible without an ML classifier, such as predicting the animal strain, etc.
We now added the 3 other tasks that Amadeus could tackle, which we will add to the final version:

| | T4 approaches | T10 oral-genital-contact | T11 oral-oral-contact |
|---------------|---------------|-------------------------|-----------------------|
| PCA baseline | 0.00 | 0.00 | 0.00 |
| Top-entry 1 | 0.020 | 0.015 | 0.013 |
| Top-entry 2 | **0.026** | 0.029 | 0.023 |
| Top-entry 3 | 0.022 | 0.015 | 0.014 |
| BAMS | 0.02 | 0.0165 | 0.014 |
| Amadeus | 0.014 | **0.05** | **0.05** |

- (3) To expand on what the system can do, we have now released a fully usable front-end user interface App to Amadeus using GPT-3.5 with the three demo datasets used in the submitted manuscript (EPM, MABe, and MausHaus videos) and asked neuroscience behaviorists to test it – namely, we added example prompts they could run and a chat interface to ask their own questions. We have now collected extra prompts not generated by the authors for testing, and we will include this in the revision. In brief, we sampled 30 naive user prompts at random out of the 362 submitted via our App (errors = ❌), and found a roughly 18% error rate with GPT3.5 (see more below for GPT4, which was only 10%); due to space limits, we show a few here: '✅ Give me the duration of time the animal engages with the object I picked and the events where the animal overlaps with the object I picked.', '❌ Perform hierarchical clustering and plot the dots with different colors based on their clusters.', "✅ Define <|sap|> as a behavior where the animal's body length elongates. Do you see sap in the epm?", "✅ Give me the tracking of all 3 animal's noses with each nose being shown in a different color." Of the 362 queries, 242 were from unique prompts (i.e., user-typed rather than executing the demos), and 329 were automatically rephrased. 
We found Amadeus had reported 129 “errors”: 38 were caused by unsupported features or undefined variables, and were thus explained to the user, while 64 originated from programming errors. Therefore, across the 362 queries there was an 18% programming error rate and an 11% unsupported-feature-request rate. **Questions**: - **1 & 2** are hopefully answered above. Note that the Rephraser LLM requires no training, and the difference between the code generator and the Rephraser is the system prompt, which is used for the LLMs to do in-context learning; we will be sure to add the Rephraser prompts in a revised version. - **3**, we note that of course there is a lot of concurrent work right now, but we did our best to put it in context and cite the papers that we saw that were closest to ours at the time of submission (even if they did not impact our own ideas), but if you have concrete examples we missed, please let us know! - **4**, this is why we semi-constrain with the API, but of course we collectively need to understand issues with LLMs. New work that uses GPTs to write executable code is being tested rapidly, and it seems that GPT-4 produces fewer hallucinations than GPT-3.5 (Cai et al. 2023 & Wang et al. 2023). Since our system can easily switch from GPT-3.5 to GPT-4, hallucinations could be reduced with the improvement of the underlying LLMs used. However, our Rephraser truly does attempt to write code 90% of the time (i.e., 329 of 362 queries from outside users), i.e., it tests whether the code is runnable and otherwise parses errors and tries again. Therefore, we are happy to tone down any claims of “no coding ever,” but we do see this as a truly new way to interact with SOTA models (like SAM, etc.), so we hope you agree this is still a useful demonstration for the field and the NeurIPS community. - **5**, we note this is still built on GPTs, so it's not merely bound to our API. 
Our system prompts and API set the ground rules to have it understand common behavioral analysis queries, but they do not limit its capacity. As an additional test, we took 10 of the failed runs from external users and re-ran them with GPT4, which reduced errors to 10%, suggesting that better LLMs also increase performance and that our API is not too limiting. - **6**, see point 3 in weaknesses above; we have now asked expert mouse behaviorists to use the system. We hope this clarifies our novel contributions and improvements, and that you'd consider raising your rating. Thank you! --- Rebuttal Comment 1.1: Comment: Dear Reviewer XjKy, Are there any clarifications we can provide for you regarding our rebuttal? Thank you for your time and efforts in this busy period.
Scenario Diffusion: Controllable Driving Scenario Generation With Diffusion
Accept (poster)
Summary: This work adopts a conditional diffusion model to generate abstract driving scenes. Strengths: It has great potential to generate future scenes with a diffusion model. Weaknesses: (W1) No comparison with existing works at all. It is significant to compare with previous methods to demonstrate the superiority of the proposed method in terms of diversity, generalizability, or other aspects. Quantitative comparisons with methods such as SceneGen [1], TrafficGen [2], CTG [3], and Scene Diffusion [4] would help to make the improvement more convincing and easier to recognize. Could the authors provide related results or briefly discuss why these methods were not included in the comparison? (W2) The proposed method aims to generate all agents simultaneously so that the relationships between them can be learned. Nevertheless, an auto-regressive manner also embodies such functionality, as it can condition on the generated agents. Have the authors explored the use of auto-regressive approaches within the proposed framework? (W3) As a generation work, a great potential would be producing additional data for training and customizing safety-critical scenes for evaluation. However, this part is missing from the current version, which means the claimed benefit is unjustified. Could the authors complement some experiments to support that? [1] SceneGen: Learning to Generate Realistic Traffic Scenes [2] TrafficGen: Learning to Generate Diverse and Realistic Traffic Scenarios [3] Guided Conditional Diffusion for Controllable Traffic Simulation [4] Generating Driving Scenes with Diffusion Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: See weaknesses for details. I am mostly concerned that this work only describes its method but does not compare with the rich existing literature at all. It is hard to tell whether the proposed method really works and is better than existing works. Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: No concern. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review! * (W1) We have tried to address in the global response why we did not provide a comparison to autoregressive models, and the difficulty of a metric that highlights the need for joint inference, rather than conditional inference. Our hope is that the additional figures in the attached PDF capture this property of our model. * (W2) We do not have a substantial and rigorous evaluation of auto-regressive models. Due to the factored nature of auto-regressive models, joint inference of vehicle states would be a challenge for these models. * (W3) We agree that experiments showing the applicability of our scenarios in, for instance, control validation are the long-term goal. However, it is not clear how we could add an adequate explanation of such experiments and the results within the space constraints. Questions: * In response to the comment that “it is hard to tell whether the proposed method really works”, we note that our MMD and other metrics are also used by existing work in the literature such as TrafficGen, and we believe they are an appropriate basis for at least an assessment of competence. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the reviewers' replies. After discussing with other reviewers, I am still concerned about the experiments. As a new field, it is okay not to compare with other works, as they might be concurrent or not fully open-sourced. However, the consensus is that the work should adequately demonstrate that the proposed method is helpful for downstream applications, while the current experiments fail to do so. Thus, my final rating is Borderline reject.
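For readers unfamiliar with the MMD statistic the rebuttal leans on, here is a minimal sketch of a biased squared-MMD estimator with an RBF kernel; the function name, bandwidth, and toy data are illustrative, not the authors' implementation:

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between sample sets X and Y (RBF kernel)."""
    def gram(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * sigma ** 2))
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
shifted = mmd2_rbf(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
```

Matched distributions give a value near zero, while mismatched ones give a clearly larger value, which is why the statistic can compare generated and training scene distributions.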
Summary: This paper proposes a novel approach to use latent diffusion for driving scenario generation. The generated driving scenario is defined as bounding boxes with associated trajectories. The proposed diffusion model is comprised of an autoencoder to encode the BEV image into latent space and a conditional UNet for diffusion modeling. To increase the controllability of the generated scenario, map conditioning and token conditioning are incorporated into the process. The experiments are conducted both on a large-scale internal dataset and the Argoverse 2 dataset, which validates the effectiveness of the proposed algorithm and its flexibility for controllable generation. Strengths: 1. The idea of applying latent diffusion to driving scenario generation is novel and interesting, as diffusion methods have shown great capability in generative tasks. 2. The proposed token conditioning mechanism brings more controllability and the possibility of customization for this task. 3. The overall writing and presentation of the paper are clear and easy to follow. Weaknesses: 1. For the metrics to evaluate the generated scenarios, it is very important to evaluate the validity of the driving scenario, but the paper only measures the rate of off-road trajectories. Other aspects like the rate of collisions between agents, smoothness and plausibility of the trajectories, and possible traffic rule violations should also be measured. 2. Although in Sec 4.3.2 some examples have shown that the proposed method is aware of the relationship between agents, using a BEV-encoded image to represent the scene without specific designs for agent interactions cannot capture the relationships between agents as well as the GNN-based methods used in motion forecasting. Having specific designs to explicitly model the possible interactions is important for complex driving scenario generation. 3. There is no direct quantitative measurement of how faithfully the generated sample follows the conditioning agent and global tokens. 4. The goal of scenario generation is to facilitate the development of other autonomous driving tasks like motion forecasting or planning. Therefore, a better way to show the effectiveness and diversity of the generated scenarios is by testing them on downstream tasks. Some simple experiments showing the performance of a motion forecasting model trained on real and generated data would greatly improve the value of the proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. During the sampling process, would it be helpful to use techniques similar to classifier-free guidance to make the sample ground on the conditioning token more faithfully? 2. Currently the agent token still needs very specific values of different aspects to define its behavior; is it possible to use some more abstract behavior tokens to only specify its overall behavior, like overtaking, speeding up, or turning right, to promote better diversity without much engineering effort? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have well discussed the limitations and the potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review! We have tried to address the comment about metrics in the global response. We respectfully disagree with the comment that “the paper only measures the rate of off-road trajectories”. We believe that the statistics reported in the different tables comparing the training and generated distributions are appropriate metrics that capture the performance of the model but we appreciate the suggestion for additional metrics. * We have tried to address the comment about agent interactions in the global response – we certainly agree that GNN-based methods are a very good way to model agent interactions, but we believe that the diffusion model has shown that it can capture interactions in a scalable manner. * We respectfully disagree with the statement that “there is no direct quantitative measurement of how the generated sample faithfully follows the conditioning agent and global token.” We believe that the MMD and EMD statistics exactly show those quantitative metrics, and Table 2 in particular shows the effect of removing the tokens: when every agent has a token, the paper states that the MMD approaches 0, and we see the results when the tokens are masked completely. * We agree that showing “the effectiveness and diversity of the generated scenario is by testing them on downstream tasks.” and downstream tasks are the ultimate purpose of our model but have not included these results. Our hope was that our experimental results describing the performance under partial tokenization (Figures 3 and 4, Table 2) and generalization across environments (Figure 5, Table 3) were more informative of the value of our model. Questions: * We very much agree that classifier-free guidance would be helpful, and we have explored using CFG to increase the strength of both map and token conditioning. 
We have also explored heuristic guidance functions to improve both agent box placement (to avoid overlapping boxes) and to improve trajectories (e.g. to avoid collisions, stay close to lane centers). However, we opted not to include that work in this paper to give more space to describe the essential parts of the approach (scene autoencoder, diffusion, token conditioning). * It is definitely possible to use more abstract tokens. We described explicit tokens as they were more obvious demonstrations and were easier to evaluate. The final position heading token feature described in Appendix C.3 is more abstract in that it describes the curvature of the future trajectory but does not directly describe the past trajectory or exact trajectory shape. As seen in Figures 1, 7, and 8 this allows the model to generate different trajectories that fit within the constraints described by the tokens. We have also explored using abstract motion categories (e.g. “lane change”, “turning”, “u-turn”, “stopping”) as token features. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks to the author for further clarifications. After reading the rebuttal and other reviews, my main concern is the evaluation part in terms of diverse and proper metrics for this task, possible baselines, and more ablations.
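The classifier-free guidance discussed in the rebuttal above combines conditional and unconditional denoiser outputs at each sampling step. A sketch of the standard combination rule (Ho & Salimans); the function name and toy arrays are illustrative, not taken from the paper:

```python
import numpy as np

def cfg_combine(eps_cond, eps_uncond, w):
    """Classifier-free guidance: extrapolate the denoiser's prediction
    from the unconditional output toward the conditional one by weight w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy noise predictions from the conditional and unconditional passes.
eps_c = np.array([1.0, 2.0])
eps_u = np.array([0.0, 0.0])
```

With w = 1 this recovers the plain conditional prediction, w = 0 the unconditional one, and w > 1 amplifies the effect of the map or token conditioning, which is what "increasing the strength" of conditioning refers to.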
Summary: This paper tackles the problem of generating both the initial configuration and vehicle trajectories for driving scenarios given an input top-down map. It proposes a latent diffusion model that operates in the latent space of a learned autoencoder trained to decode vehicle boxes and trajectories. The model can be directly conditioned to encourage a subset of agents to adhere to user specifications. Experiments show that once trained on a large real-world driving dataset, the model can produce reasonable scenarios at multiple scales and allows for user control over a subset of agents. Strengths: The problem is important for AV testing and training. The proposed method simultaneously generates both an initial scene configuration and vehicle trajectories, instead of in two stages as in prior work. The proposed latent diffusion model is novel for vehicle trajectories – all prior work I have seen is explicit trajectory diffusion and the design of the autoencoder for such a model is not trivial. The model allows directly (optionally) conditioning on attributes of one or more agents which gives nice flexibility and allows generating variations of scenarios directly taken from e.g. a log or specified by a user. Results of generating short 4 sec scenarios look reasonable. Experiments show the ability to generate scenarios of various sizes on the sizable Argoverse 2 dataset. Fig 4 is a neat demo showing that the model is picking up on the semantics of the relationships between vehicles. The appendix is extensive and provides most of the necessary details to enable reproducibility. 
Weaknesses: There are a couple of claims that seem over-stated and should be toned down or clarified: * Conditioning tokens are “more abstract features” (L208) that do not require “fully specifying box and trajectory for the agent” (L206) or “having observed such agents in the real world” (L43) – if I understand correctly, these tokens require a user to give the model the desired agent position, heading, length, width, speed, and sometimes curvature over time (L211), which is quite a low-level specification of the trajectory and likely will need to be taken from a log. Or do tokens describe the state only at a single $t=0$? If so, this should be made more clear, and I still would not say this is “abstract”. * The model “generalizes to different geographical regions” (L9) – Table 3 shows seemingly limited ability to generalize across regions, with models trained specifically on each region far outperforming other single-region models. The model trained on all regions does reasonably well on each, but this is not generalizing across regions. Moreover, it would be helpful to add a metric that indicates similarity between regions (e.g. MMD between ground truth scenes in A and B) to put the numbers in context. While this experiment is informative about performance across regions, it may be a stretch to say it demonstrates generalization across regions. Some limitations of the technical approach may make the proposed method less practically useful: * The agent tokens are not hard constraints, and in results like Fig 3 the specified agents move around between samples and don’t exactly meet user specifications. A natural solution for diffusion would be to use some kind of test-time guidance as in [37] and [Trace and Pace, Rempe et al., CVPR 2023]. * The 4 sec time horizon (2 sec past + 2 sec future) seems quite short and limits the possible scenarios it can capture. * The conditioning requires a user to specify all attributes of an agent. 
But it seems with the masking strategy it should be possible to condition on partial information (e.g. specify the past trajectory but generate the future, or only specify the speed), which would greatly enhance the flexibility. Some additional evaluation metrics would help to evaluate realism and diversity, for example, MMD on speeds, collision rate between vehicles, and a diversity metric (like average pairwise distance between multiple samples). Additionally, some key experiments are missing: * The accuracy of the model’s controllability is never evaluated, which is the main claim of the paper. It would be good to report how closely generated agents follow the agent tokens specified by the user (e.g. L2 distance between positions, headings, speed, etc.). The 0% model in Table 2 kind of does this, but the reported MMD metric is not very intuitive. * A comparison to a baseline to justify the advantages of a diffusion model that generates both static states and dynamic trajectories jointly. E.g. a comparison to TrafficGen [6], or a simpler baseline that only generates static states and then uses a trajectory prediction or traffic simulation model to roll out motion, e.g. [BITS: Bi-level Imitation for Traffic Simulation, Xu et al., ICRA 2023]. Or evaluating whether the trajectories generated by the diffusion model align with those from a SOTA trajectory forecasting model. * It would be good to see an example of how these scenarios can be useful. For example, that training a driving policy on a dataset augmented with generated scenarios does better than training on just a real-world dataset. * More qualitative results (ideally videos, but additional figures in the supplement would also be helpful). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Overall, I am borderline on this paper. 
While the latent diffusion formulation is novel and technically sound and the model nicely generates both starting positions and motion for the scenario, the primary contribution of “controllability” is not sufficiently evaluated, and additional evaluations/baselines would strengthen the paper. In the rebuttal, I would really like the authors to clarify my confusion on the agent tokens and ideally evaluate the controllability of the model by showing how closely it follows user specifications. The method in Sec 3 is at times rather high level and leaves some questions: * Sec 3.1: why is the encoder not conditioned on the map like in the typical CVAE formulation? * Sec 3.3: A lot of info about tokens is relegated to experiments in Sec 4.3.1 or the appendix but may be better to include in the methods. I’m still left wondering how exactly the cross attention works, e.g. how are the tokens associated with agents in the scene since there are no canonical agent IDs or ordering? Also, if tokens specify the properties of agent trajectories, why do multiple samples of the same scenario (e.g. Figs 1, 7, 8) exhibit varying trajectories for the specified agents? * How is the number of agents in the scene determined by the diffusion model? Through the agent probability? Why is the global scene token represented as the masking probability and not directly the number of agents desired in the scene? * The methods mention 2 agent tokens are used, but then 3 are used in Fig 3 and N are used in Table 2, so it’s not clear what the problem formulation is and what’s used in practice. 
One other small suggestion: since explicit trajectory diffusion models are becoming more prevalent [37] [Trace and Pace, Rempe et al., CVPR 2023] [Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion, Gu et al., CVPR 2022] [Planning with Diffusion for Flexible Behavior Synthesis, Janner et al., ICML 2022], it would be good to discuss them in related work and compare them to the proposed latent diffusion. ===================== After Rebuttal ====================== After considering the other reviews and discussion with the authors and reviewers, I have decided to keep my rating leaning towards reject. Given that the problem is relatively new and the latent diffusion approach is novel, I am willing to be lenient on the lack of a baseline comparison, especially since TrafficGen (ICRA 2023) is concurrent work and SceneGen (CVPR 2021) did not release code. However, without a baseline, the experiments and metrics need to be comprehensive and convincing to show that generated scenes are realistic and match user specifications; I don’t believe this is the case in the current draft. The rebuttal Table 1 adds some new map-based metrics, but important realism evaluations like collision rate, plausibility of trajectories (smoothness and comfort), and diversity are still missing. Moreover, the added evaluation of controllability in rebuttal Figure 3 is rather coarse (considered “correct” if within 2.2 m) and does not consider other user specifications like length, width, speed, and curvature. I also think it would be a strong addition to show results on higher-level semantic conditioning of the model, such as vehicle type and maneuver type (e.g. lane change, turning, or u-turn), to justify the claim that the model can handle “abstract” specifications rather than just partial explicit specifications. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Limitations are sufficiently discussed in Sec 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review! We tried to address the question about additional metrics in the global response, but we agree that we did not provide sufficient detail on the comparison and the advantages of the diffusion model, and will add these to the final paper if accepted. We appreciate the guidance to be more precise about the claims of the paper. * It is correct that the tokens encode agent position, heading, length, width, and speed, and that these are only initial conditions. However, even with the curvature feature, we do not prescribe the exact path of the trajectory — the model produces variations from the partial description provided by agent tokens. The same architecture can be used with higher-level descriptions. For example, we can give position using relatively coarse specifications, use “bus” vs “car” tokens instead of the vehicle’s size, and specify the type of motion (e.g. lane change, turning, or u-turn). We have experiments with these higher-level descriptions. * We would be grateful for clarification on the comments that our techniques do not generalize. While performance on each training set in Table 3 is definitely better than performance on a different test set, the differences are not significant in every case, e.g., Train/Test on B vs Train-on-B/Test-on-C, and we believe this demonstrates some generalization across environments. The largest difference involves environment D, and we are careful in the paper to say that we do not generalize to D, and give specific examples of where and how the models are not generalizing. * The agent tokens are not hard constraints by design. We want the model to produce different variations following a partial description via tokens. As you suggest, we have used guidance to refine parts of the generated scenario (e.g. stay close to lane centers, avoid overlapping boxes) but those results were out of the scope of this paper. 
* We used 1-4 seconds of agent history as input and trajectory prediction to be consistent with the literature. * We have provided a precision-recall graph of the controllability in the attached PDF, see Figure 3 in the attachment. Our model achieves precision and recall well above 90%. * We have tried to address the question about comparison to existing techniques in the global response, but we agree that we did not provide sufficient detail for the comparison and the advantages of the diffusion model, and will add these to the final paper if accepted. * We agree that evaluation of the performance of downstream tasks using the generated scenarios is the ultimate purpose of our model but our hope was that our experimental results describing the performance under partial tokenization (Figures 3 and 4, Table 2) and generalization across environments (Figure 5, Table 3) were more informative of the detailed performance of our model. We have added additional qualitative figures in the PDF attachment, see Figures 1 and 2 in the PDF. Questions * In Sec 3.1, the map information is provided to the decoder to help the output trajectories be consistent with the map. Since the only outputs of the scene autoencoder are boxes and trajectories we do not need the latent embedding to include information about the map explicitly. Empirically when we tested concatenating the map image to the encoder inputs we saw no change in performance for the scene autoencoder. * In Sec 3.3, the cross-attention inside the denoising model is as described in the Latent Diffusion paper. Each pixel from the intermediate representation of the Unet is a query, and the keys/values are agent tokens. When agent tokens include position information the model learns to associate them with queries from the corresponding region of the image. * As you correctly point out, there is no fixed agent ID or ordering required. While we do not demonstrate it in this paper, the tokens can be flexible (e.g. 
multiple tokens describing different aspects of a single agent, a token describing multiple agents, etc.). The agent tokens as we define them indicate “there’s an agent in this region with these properties” and the cross-attention updates the latent embeddings in that region to reflect this. * Multiple samples have different trajectories because the tokens provide only a partial description of agents. We do not want to exactly dictate the trajectory, to allow the model to generate slight variations, hence the term “abstract features”, which we will replace. * Regarding the number of agents, if no tokens are used, the model learns the distribution of scenes (including number of agents) conditioned on the map. This is measured by the Earth Mover’s Distance metric reported in Table 5. If agent tokens are used, the model still learns the distribution of scenes (including number of agents), but now conditioned on the map, the number of agent tokens (since every agent token describes a unique agent), and the global scene token, which provides information on how many additional agents there are. We experimented with various ways to represent the scene density as a token (e.g. masking probability, total number of agents, total number of agents divided by the drivable surface area) but found no clear advantages in terms of model performance. * Regarding the number of tokens, Section 3.3.2 says “we explore two types of tokens”. There can be any number of agent tokens. These tokens are used via cross-attention, which supports an arbitrary number of keys and values. We intentionally demonstrate this capability by showing examples with 2 and 3 agent tokens, and as you point out, Table 3 requires using an arbitrary number of tokens. For demonstration purposes we found it better to limit the number of agent tokens to 2-3 so that the model had flexibility to fill in the rest of the scene. 
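The cross-attention mechanism the rebuttal describes (flattened U-Net pixels as queries, agent tokens as keys/values, so any number of tokens is supported) can be sketched at the shape level as follows; all array sizes, names, and projection matrices here are illustrative, not the authors' implementation:

```python
import numpy as np

def token_cross_attention(pixels, tokens, Wq, Wk, Wv):
    """Each flattened U-Net feature-map pixel is a query; agent tokens
    supply the keys and values, so the token count is arbitrary."""
    Q, K, V = pixels @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ V  # per-pixel aggregation of token information

rng = np.random.default_rng(1)
out = token_cross_attention(
    rng.normal(size=(16, 8)),   # 16 "pixels" with feature dim 8
    rng.normal(size=(3, 6)),    # 3 agent tokens with feature dim 6
    rng.normal(size=(8, 4)),    # query projection
    rng.normal(size=(6, 4)),    # key projection
    rng.normal(size=(6, 4)),    # value projection
)
```

Because the softmax runs over the token axis, a token whose features encode a position can draw the attention of queries from the corresponding image region, which is how the model can learn to associate tokens with spatial locations without agent IDs.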
--- Rebuttal Comment 1.1: Title: Followup question Comment: Thank you to the authors for the detailed response and clarifications, I really appreciate it. The authors stated that they “have experiments with these higher level descriptions” like “bus” vs “car” tokens instead of the vehicle’s size, and the type of motion (e.g. lane change, turning, or u-turn).” Are these experiments included in the paper or supp? These would be exciting results and showing these examples would greatly strengthen the argument that the model can handle “abstract” specifications. Otherwise, please consider re-phrasing the description to be “partial” specification rather than “abstract”.
Summary: This paper presents a conditional latent diffusion model (LDM) for generating oriented BEV vehicle bounding boxes, each associated with a 4-second trajectory (2s past, 2s future) at a 1s temporal resolution. The autoencoder component of this LDM uses an object detection architecture similar to CenterNet, with rasterized agent inputs. The diffusion model is conditionable on the local map, a global traffic density parameter, and an optional per-agent attribute descriptor. The model is validated on an internal proprietary dataset and Argoverse 2 via a series of ablation studies. Strengths: What stands out in this work is the overall simplicity and elegance of the framework. It draws on ideas from different communities (generative modeling, object detection, traffic simulation). It also includes interesting original ideas that are potentially applicable to other generative modeling tasks in robotics and self-driving, such as (1) the imbalanced VAE (raster input, vector output); and (2) the masking idea to make agent-level conditioning optional. The writing is clear and mostly easy to follow. Weaknesses: The key weakness of this work is the lack of any baselines in the experimental analysis. Note that this is not a completely novel task (Section 5 discusses prior techniques). While it is true that not all prior work may be conditionable on per-agent attributes like the proposed model, I believe a comparison to prior work (potentially in the setting without agent token conditioning, as in the last column of Table 2) is warranted to meaningfully understand how well diffusion models perform this task. Secondly, the diffusion model architecture is hard to understand from the main paper in its current form, and a clear description is only available in the supplementary. For example, L_{KL} is marked in Fig. 2a but not described at any point in the main text. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. 
Would it be possible to include one heuristic and one autoregressive approach (as mentioned in Section 5) for the Argoverse dataset as additional columns in Table 2? This is the main limitation of the paper from my perspective, and I am happy to raise my score if it is addressed, or a convincing reason for why such baselines are not useful is provided. 2. Since the conditioning (controllability) is pitched as a key contribution, this should be clearly understandable from Fig. 2b. Would it be possible to expand “M” to show both its encoder and decoder in Fig. 2, with more clarity on how the different inputs/conditions are passed into the diffusion model architecture, instead of a single input arrow? Minor: 1. I couldn’t understand the caption in Table 3, specifically the text enclosed in brackets. 2. [34] and [37] have incorrect citation “year” fields (2022 → 2023) Update: Thanks a lot for your clarifications and the efforts made to update the draft. While I no longer have any issues regarding the method/presentation, I believe the rebuttal does not fully address the concern regarding limited baselines in the analysis of the proposed model. The other reviewers have also commented on the insufficiency of the current experiments to demonstrate a high degree of realism and controllability with the proposed model, and I concur. Therefore, I am maintaining my rating of borderline reject. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The key limitations (such as the currently limited generalization capability and simple task definition) are discussed in Section 6. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review! We have tried to address the question about comparison to existing techniques in the global response, but we agree that we did not provide sufficient detail for the comparison and the advantages of the diffusion model, and will add these to the final paper if accepted. We can definitely clarify the diffusion architecture – the $L_{KL}$ regularization loss in Fig. 2a was unfortunately only described in the supplementary material – we apologise for that oversight. Questions: * The suggestion of a comparison to an autoregressive model and a heuristic as additional columns in Table 2 is extremely helpful – thank you for that suggestion. However, we hope that our explanation in the global response provides insight as to why we did not provide the explicit comparison to an autoregressive model in the paper. * The suggestion to expand “M” to show both its encoder and decoder in Fig. 2 is also very helpful and we will make that change to the final paper if accepted. It may be helpful to imagine the denoising module “M” being expanded to show something similar to Figure 3 from [Latent Diffusion](http://arxiv.org/abs/2112.10752). Thank you for the corrections – the text in the parentheses in Table 3 was meant to be an internal comment in the source latex. --- Rebuttal Comment 1.1: Comment: Thanks a lot for your clarifications and the efforts made to update the draft. While I no longer have any issues regarding the method/presentation, I believe the rebuttal does not fully address the concern regarding limited baselines in the analysis of the proposed model. The other reviewers have also commented on the insufficiency of the current experiments to demonstrate a high degree of realism and controllability with the proposed model, and I concur. Therefore, I am maintaining my rating of borderline reject.
Rebuttal 1: Rebuttal: We appreciate the reviewers' thoughtful and detailed comments, and agree with the majority of the comments and suggestions. In terms of the overall identified weaknesses, the reviewers’ concerns can be roughly grouped into: * Absence of comparisons to previous work, especially previous autoregressive models * Limited quantitative evaluations or metrics (along with additional comments from individual reviewers). We believe that these two concerns are linked, and get to the heart of what we were attempting to show in this paper. One of the challenges of urban driving is that the correlations between the agents are often tightly coupled and cannot be easily factored in scenarios where the agents are forced to interact with each other. For instance, a scenario where another vehicle is pulling out in front of our AV cannot easily be generated by an autoregressive conditional model — both our AV and the other vehicle must be placed simultaneously with respect to each other in order to force the interaction. We show our ability to generate this scenario in Figure 1 of our attached PDF. This is not to say that an autoregressive model cannot be coerced into a limited set of pairwise or joint interactions by regressing on combinations of vehicles, but learning such a model is non-trivial and will not scale as easily as the joint model produced by diffusion. It is worth noting that previous results such as TrafficGen do not report scenarios that require *specific* forms of interaction — for instance, the TrafficGen paper reports interaction metrics as collisions, rather than the rate of inter-vehicle responses. Rather than show this difference experimentally, we attempted to articulate within the text (in the introduction and in Section 4.3.2) a principled reason for a diffusion model, and to argue that autoregressive models are by design limited to locally factored representations. 
We agree that our metrics and evaluations do not necessarily highlight this capability relative to autoregressive models, but it is also not clear how convincing a metric comparison would be. The advantage of diffusion is the ability to learn a high-dimensional joint distribution, capturing many agent correlations as in Figure 4 of our paper. Our best attempt at a metric was the distributional comparisons in Tables 1, 2 and 3. We will add additional metrics proposed by the reviewers, such as the rate of collisions between agents and traffic rule violations, and have added additional metrics in Table 1 of the attached PDF. We agree that the paper was not clear on the advantages of a joint distribution inferred by diffusion, and that such a model can capture correlations that an autoregressive model cannot easily capture. We will revise the text to better reflect the purpose of diffusion, and show clearer examples. Pdf: /pdf/f3e59cf2e3cb9dfcafedaf91e55816e2fa73399f.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Generalization bounds for neural ordinary differential equations and deep residual networks
Accept (poster)
Summary: [Post-rebuttal Comments] I thank the authors for the discussions after the authors' rebuttal. My questions are appropriately answered. So, I want to keep my score and vote for acceptance. [Original Comments] This paper evaluates the generalization performance of the class of functions represented as solutions of time-dependent parametrized ODEs (including time-dependent neural ODEs) and their discretized variants, that is, residual networks whose weight parameters change smoothly with respect to the layer index. The obtained generalization bound is $O(1/n^{1/4})$ for the general case and $O(1/n^{1/2})$ for the time-independent case (for sample size $n$). Numerical experiments were conducted to examine the relationship between the smoothness of weights between successive layers and generalization performance. Also, a learning method was proposed that adds the difference in weights between successive layers as a regularization term. Strengths: - The derived results are consistent with classical statistical learning theory. Specifically, setting $K_\Theta=0$ yields the classical $O(1/n^{1/2})$ rate. - Numerical experiments verify the relationship between the smoothness of weights and generalization performance. These results align with the theory. - Comparisons with existing results (Bartlett et al. (2017), Golowich et al. (2018)) are carefully considered. In particular, this study is novel in that it provides generalization bounds for a class of depth-independent models to which Golowich et al. (2018) are not applicable. - The paper is well-written. The organization and mathematical description of the paper are appropriate, and it was easy to understand the paper's main point. Weaknesses: - To the best of my knowledge, it is rare to impose smoothness between the weights of successive layers in a discretized ResNet model. In addition, the prediction accuracy of those models has yet to be confirmed to be sufficient. 
Therefore, it is difficult to say that this paper provides a theoretical guarantee for practical models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In l.84--91, this paper refers to existing studies of the generalization bound of continuous-time NNs. However, their relationship with this study has yet to be discussed extensively. Is the novelty of this compared with them that the former is a time-independent ODE while the latter is a time-dependent one? Also, is there any study that dealt with time-independent ODEs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in the experiment section (l.338) and conclusion section (l.346.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
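As a schematic reading of the rates discussed in this review (our shorthand with hypothetical constants $C_1, C_2$; the exact statement is the paper's Theorem 1 and Corollary 1), a Lipschitz-based bound of this kind combines a fast and a slow term:

```latex
\underbrace{\mathcal{R}(\hat f) - \widehat{\mathcal{R}}_n(\hat f)}_{\text{generalization gap}}
\;\lesssim\;
\frac{C_1}{\sqrt{n}}
\;+\;
C_2(K_\Theta)\,\frac{1}{n^{1/4}},
\qquad C_2(0) = 0,
```

so that for time-independent coefficients ($K_\Theta = 0$) the slow $n^{-1/4}$ term vanishes and the classical $n^{-1/2}$ rate is recovered.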
Rebuttal 1: Rebuttal: > To the best of my knowledge, it is rare to impose smoothness between the weights of successive layers in a discretized ResNet model. In addition, the prediction accuracy of those models has yet to be confirmed to be sufficient. We agree that imposing smoothness between successive layers is not a very common setup in residual networks, although it is the right framework for residual networks to approximate neural ODEs, which are commonly used models. We leave for future work the task of evaluating the prediction accuracy on real-world data of residual networks in this setting. > In l.84--91, this paper refers to existing studies of the generalization bound of continuous-time NNs. However, their relationship with this study has yet to be discussed extensively. Is the novelty of this compared with them that the former is a time-independent ODE while the latter is a time-dependent one? Two of the three papers cited in lines 84--91 tackle recurrent neural networks, which handle time series, in contrast to our residual networks, which handle vectors. The time series setup requires significantly different models and analysis. The last paper answers a separate (although related) question regarding generalization across environments, and only gives a generalization bound for linear ODEs. We will make the comparison with these works more detailed. > Also, is there any study that dealt with time-independent ODEs? We were made aware after submission time of another work (reference below) that shows a generalization bound for parametrized ODEs for manifold learning, which applies in particular to neural ODEs. Their proof technique bears some similarities with ours, but the model (time-independent versus time-dependent ODEs), task (unsupervised manifold learning versus supervised learning), as well as the absence of a connection with residual networks, differ from our approach. 
We will of course include the reference and discuss the differences in the next version of the paper. J. Hanson and M. Raginsky. Fitting an immersed submanifold to data via Sussmann’s orbit theorem. In 2022 IEEE 61st Conference on Decision and Control (CDC), pages 5323–5328, 2022. We are not aware of any other generalization bound for neural ODEs, be it in the time-dependent or time-independent setting. --- Rebuttal Comment 1.1: Comment: I thank the authors for considering my comments and responding to them. **Smoothness between successive layers** I agree with the authors that while the smoothness assumption between successive layers is uncommon, it is natural when we interpret a ResNet as a discretization of a neural ODE. I also agree that the question of how the smoothness assumption affects prediction performance remains open. Since generalization is not an issue, at least theoretically, the problem may lie in expressive power or optimization. **Relation to prior work on continuous-time NNs** OK **Relation to prior work on time-independent ODEs** OK
Summary: The authors present a generalization bound for a large class of ODEs in this work. They connect ODEs to residual architectures to control generalization via the differences between successive weight matrices. Strengths: The authors present a theoretical bound, which is something worth highlighting when everybody is just building ad hoc pipelines and then testing them against some benchmark. From this point of view, this work represents a clear advance over other results. Weaknesses: Even though the results are strong, their consequences and potential uses are not presented in a way that helps the reader fully understand the applications of these results. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Have you measured how the bounds fare in time series problems? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We consider in this paper a residual network model that is not adapted for time series: the input is a vector $x \in \mathbb{R}^d$ that is used on the first layer of the network. On the contrary, models for time series typically input data at each layer, corresponding to a new time step. The time series setup requires different models (such as ODE-like RNNs, which are mentioned in the related work section), which are not in the scope of the current paper. We will specify this more clearly in the related work section where we mention RNNs, to avoid inducing any confusion.
Summary: The paper explores the generalization ability of neural ordinary differential equations and deep residual networks through Lipschitz-based complexity arguments. The bound, specifically for the discretized version, involves the maximum magnitude of the difference between successive weight matrices, which is not commonly seen. The paper also uses numerical experiments to investigate how this quantity relates to the generalization capability of deep residual networks. The generalization gap is smaller when the quantity is smaller, but adding the quantity as a penalization term did not improve the prediction performance. Strengths: [originality] - To the best of my knowledge, this paper is the first to present a generalization error bound for neural ODEs. The proof techniques used in this work are rather standard, but they are combined in new ways to derive the results. - The potential importance of the Lipschitzness of the weights (i.e., the maximal magnitude of the differences between successive weight matrices) is newly proposed. [quality] - The analysis and results seem reasonable, and the mathematical setup is adequately explained. [clarity] - The paper is well-organized and articulately written. Its concise abstract enhances the paper's accessibility and ease of comprehension. [significance] - Neural ODEs and deep residual networks are now among the standard tools in the machine learning community. Therefore, theoretical results that advance our understanding of the behavior of these models are relevant and can improve their reliability for use in various applications. - The analysis in Theorem 1 reveals a term whose convergence rate is O(n^{-1/4}). Corollary 1 exemplifies that the slow-rate term can be eliminated by making the coefficient parameter time-independent in the infinite-dimensional case, partly elucidating the consequences of having a time-dependent component in the model. 
Weaknesses: [originality] - None in particular [quality] - The paper would benefit from additional citations, particularly where named theorems are mentioned in the main text. For instance, it would be helpful to add citations to the Picard–Lindelöf theorem at Line 59 and to Grönwall's inequality at Line 166 for the convenience of the reader. [clarity] - None in particular [significance] - None in particular Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Have you conducted any experiments of Figure 1b using the maximum max-norm of the weights as a penalty term (or a sum of the max-norms of the weights) instead of the squared sum of Frobenius norms? While I acknowledge the equivalence between matrix norms, and although it may not be strictly necessary for the paper, I believe that readers (including myself) would be interested in seeing whether the behavior is different when the quantity that appeared directly in deriving the theoretical guarantee is used. [minor suggestions] - Line 112 “takes values into” → “takes values in” - Line 285 “Corollary 2 (of Theorem 1.1 of Bartlett et al. (2017))” → “Corollary 2 (corollary of Theorem 1.1 of Bartlett et al. (2017))” - Line 319 “a.k.a.” → “i.e.,” may be more appropriate? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: None in particular Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The paper would benefit from additional citations, particularly where named theorems are mentioned in the main text. We will add citations for the Picard-Lindelöf theorem (actually the citation is present in the appendix but not in the main paper) and Grönwall’s inequality. > Have you conducted any experiments of Figure 1b using the maximum max-norm of the weights as a penalty term (or a sum of the max-norm of the weights) instead of the squared sum of Frobenius norms? We had not previously conducted an experiment with the max-norm penalization, because we thought that this penalization may be too irregular, in the sense that, at any one step of the backpropagation, it only impacts the maximum weights and not the others. Nevertheless, this is a very relevant remark, and we performed the suggested experiment. The results are mixed: we observe a similar effect of the regularization on the generalization gap as in the paper when using the $L_2$ norm of the max-norm of the weights. On the other hand, penalizing by the maximum max-norm of the weights does not have an effect on the generalization gap. We interpret this last result as a consequence of the fact that the maximum max-norm is too irregular (it only acts on two scalar weights at each step) to be used in practice. We will report these results and discussion in the next version of the paper. The minor suggestions are well noted. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I have read the authors' responses, and points have been noted. I have no further questions, and my evaluation of the paper did not change.
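As a concrete reading of the penalties compared in this exchange, here is a minimal numpy sketch (function and variable names are ours, not the paper's code). It computes the squared sum of Frobenius norms of the weight differences (the Figure 1b penalty), the $L_2$ norm of the per-layer max-norms, and the maximum max-norm, the quantity appearing directly in the bound. In training, any of these would be added to the loss with a coefficient $\lambda$.

```python
import numpy as np

def smoothness_penalties(weights):
    """Candidate penalties on differences between successive layer weights.

    weights: list of (d, d) arrays, one per residual layer.
    Returns a dict with the three penalties discussed above:
      - "frob_sq":     squared sum of Frobenius norms of the differences
      - "l2_maxnorm":  L2 norm of the per-layer max-norms of the differences
      - "max_maxnorm": maximum max-norm over layers (the bound's quantity)
    """
    diffs = [w2 - w1 for w1, w2 in zip(weights[:-1], weights[1:])]
    maxnorms = np.array([np.max(np.abs(d)) for d in diffs])
    return {
        "frob_sq": float(sum(np.sum(d ** 2) for d in diffs)),
        "l2_maxnorm": float(np.sqrt(np.sum(maxnorms ** 2))),
        "max_maxnorm": float(maxnorms.max()),
    }
```

The rebuttal's observation is then easy to picture: the max-norm penalty only "sees" the single largest entry of the differences at each step, whereas the Frobenius penalty spreads its gradient over all weights.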
Summary: The paper provides a generalization bound for a large class of time-dependent and time-independent neural ODEs. In addition, by leveraging the connection between neural ODEs and deep residual networks, the paper provides a depth-independent generalization bound for the class of deep residual networks. The bound is compared with some earlier results, showing its novelty. The paper introduces a novel way of controlling the statistical complexity of neural networks, and the bound depends on the magnitude of the difference between successive weight matrices. Numerical experiments are provided to show the relationship between this bound and the generalization ability of neural networks. Strengths: The paper provides a generalization bound for the large class of time-dependent and time-independent neural ODEs and deep residual networks. The bound is compared with some earlier results, showing its novelty. The paper introduces a novel way of controlling the statistical complexity of neural networks, and the bound depends on the magnitude of the difference between successive weight matrices. Numerical experiments are provided to show the relationship between this bound and the generalization ability of neural networks. Weaknesses: Some expressions are not very clear, and some results have overly strong conditions. Please see Questions below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. In Proposition 2 and Theorem 1, what do $R_X$ and $R_Y$ mean? The reviewer guesses that $R_X$ is the bound of the set of initial values $x$, but these two notations, as well as $X$ and $Y$, do not have clear definitions. 2. A clear definition of $n$ in Theorem 1 is missing. If the data is any $n$ samples, the conclusion of Theorem 1 is obviously wrong. 3. The condition in Corollary 1 is too strong. The reviewer thinks a neural ODE should contain at least one hidden layer. 
The reviewer also thinks that the generalization of a neural ODE with one hidden layer can be obtained by using Theorem 1, by studying the Lipschitz constant of $\sigma(Wx)$. 4. The reviewer thinks that some existing definitions or symbols, such as the covering number and sub-Gaussianity, should be briefly explained. 5. The numerical experiments show the correlation between the generalization gap and the maximum Lipschitz constant of the weights. However, the generalization gap proved in this paper mainly depends on the number of samples $n$. Could the paper provide experiments to show the correlation between these two quantities as well as the convergence order? 6. Some anomalous data points can be found in the experimental results, such as the point in the bottom-left corner of Figure 1. Is there any explanation for this? 7. Theory and numerical experiments show that a small Lipschitz constant of the weights reduces the generalization gap. Does this indicate that time-independent coefficients ($K_{\Theta}=0$?) have better generalization? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The notations $X$, $Y$, $R_X$ and $R_Y$ are defined in Section 3.1 (lines 111-113). 2. $n$ is indeed the sample size (defined in Section 3.1, line 111), and the training sample is drawn i.i.d. from the same distribution as the test sample (as specified in Section 3.1). Does this answer your concern about Theorem 1? If not, we would be very interested if you could elaborate on your statement. 3. Our parameterized ODE model (5) does not include the case where there are weights inside the non-linearity, since we assume the dynamics at time $t$ to be linear with respect to the parameters (which still makes the input-output mapping highly nonlinear, see Section 3.2, lines 150-154). As a consequence, the extension to the case you mention is nontrivial, and we agree it would be very interesting. We leave it for future work and will mention this possible extension in the conclusion of the paper. 4. The covering number is informally defined on lines 172-174; we will add a more formal definition, as well as the definition of sub-Gaussianity. 5. Following your suggestion, we performed some experiments varying the sample size $n$. We observe a smaller generalization gap when increasing $n$, as expected by the theory. Unfortunately, it is difficult to say more, and in particular to report a convergence rate, because of a large amount of noise in the experiments (due to the data splitting used to vary the sample size and to the optimization algorithm). 6. The points in the bottom-left corner of Figure 1a correspond to a very small number of training epochs (typically equal to 1). At this early stage in the training process, the model is underfitted, and the generalization gap is very negative. 7. Your statement is correct: we do observe that time-independent coefficients have better generalization (however at the cost of a less expressive model). 
We will report this in the experiments by adding data points corresponding to time-independent coefficients (which can be thought of as taking $\lambda$ to infinity in Figure 1b). --- Rebuttal Comment 1.1: Comment: According to the author's response and my reading of the proof, my question has been effectively addressed. However, I would like to suggest adding rigorous definition of the symbols (e.g., $n$) in the theorems. --- Reply to Comment 1.1.1: Comment: Thank you again for your review and additional suggestion. In the next version, we will recall the definitions before the theorems or refer to Section 3.1 to remove any ambiguity.
Rebuttal 1: Rebuttal: Dear reviewers, We warmly thank you for your time and relevant comments, which will help us improve our work. If accepted, we intend to take your suggestions into account, making use of the additional page. We answer the specific questions raised by the reviewers in individual responses. Thank you again, Sincerely, The authors
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Accelerating Reinforcement Learning with Value-Conditional State Entropy Exploration
Accept (poster)
Summary: This paper addresses the problem of sample-efficient exploration in sparse-reward deep reinforcement learning. It builds on previous literature on the maximization of the state entropy as an exploration objective, which was mostly used in reward-free settings, proposing a value-conditional state entropy objective used along with the task reward. The proposed objective pushes the agent to maximize the state entropy while penalizing the entropy over the estimated values of the visited states. This value-conditional state entropy is implemented through an off-the-shelf non-parametric conditional entropy estimator. Finally, the paper compares the performance of value-conditional entropy maximization against standard entropy maximization in MiniGrid, DeepMind Control, and Meta-World. Strengths: - The paper proposes an intrinsic reward that can be easily incorporated into any previous method; - The experiments show that the value-conditional objective brings improved/matching performance in a variety of domains; - The paper clearly presents its ideas and contribution, and also includes some compelling visualizations of the conditional entropy estimation. Weaknesses: - The intrinsic reward computation requires robust estimates of the value function, which is a challenge per se, especially in sparse-reward domains; - As mentioned by the authors in their conclusion, the theoretical ground for value-conditional state entropy is unclear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: This paper advocates for using state entropy bonuses to accelerate RL in sparse-reward tasks, instead of training an entropy-maximizing policy in reward-free settings, which is the most common mode of previous works. 
To adapt state entropy bonuses to the demands of the sparse-reward task, which requires dealing with the exploration-exploitation trade-off, the paper proposes to condition the entropy of the visited states on the value of those states, effectively driving exploration towards rewarding regions of the state space. This interesting notion leads to a methodology that can be easily incorporated into previous algorithms while benefitting the resulting sample efficiency in a variety of domains. However, the paper does not provide a formal theoretical understanding of the proposed approach, which leaves some doubts on the generality of their results. Anyway, I am currently providing a slightly positive score while encouraging the authors to improve the theoretical ground for value-conditioned entropy bonuses, and especially how the estimation error of the value function impacts the learning process. **(Clarification on the objective)** From my understanding of Algorithm 1, the proposed method maximizes the value-conditional state entropy over the samples taken from the replay buffer, and not just the samples drawn with the latest policy. While this is common in previous works as well (e.g., Liu and Abbeel, 2021), I am wondering whether looking at the value-conditional entropy of the latest samples is more appropriate in a setting where external reward is also present. Can the authors discuss their implementation choice and comment on the pros and cons of the two alternatives? **(Sensitivity to $\beta$ also for VCSE)** The experiments report an interesting analysis on how the value of $\beta$ affects the performance of A2C+SE. It would be interesting to inspect also the sensitivity of VCSE to the corresponding $\beta$ value. **(Comparison with CEM)** A recent work (Yang and Spaan, CEM: Constrained entropy maximization for task-agnostic safe exploration, 2023) considers state entropy maximization under cost constraints. 
I am wondering whether their approach could be also used to address the problem of reducing task-irrelevant exploration, e.g., by introducing appropriate value constraints. Can the authors discuss the comparison between their solution and CEM, possibly highlighting pros and cons of the two? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Some limitations of the paper and the proposed methodology are reported in the first paragraph of Section 6. I believe that a formal discussion on how the value estimation error affects their method would be extremely valuable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
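For readers unfamiliar with the non-parametric estimators this line of work relies on, here is a minimal numpy sketch of a $k$-NN state-entropy bonus in the style of Liu and Abbeel (2021). The paper's VCSE additionally conditions this estimate on value estimates, which the sketch omits; all names are ours, not the paper's code.

```python
import numpy as np

def knn_entropy_bonus(states, k=3):
    """Per-state intrinsic reward proportional to the log distance to the
    k-th nearest neighbor among visited states, a standard particle-based
    proxy for state entropy. Rarer states (larger k-NN distance) receive a
    larger bonus, which pushes the policy towards less-visited regions.
    """
    states = np.asarray(states, dtype=float)
    # Pairwise Euclidean distances between all visited states.
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    # Index k skips the zero self-distance at index 0 of each sorted row.
    knn_dist = np.sort(d, axis=1)[:, k]
    return np.log(1.0 + knn_dist)  # +1 keeps the bonus finite and >= 0
```

An isolated state thus earns a larger bonus than one in a dense cluster, which is the mechanism the unconditional state-entropy objective relies on, and which VCSE tempers by conditioning on value so that high- and low-value regions are explored separately.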
Rebuttal 1: Rebuttal: Dear Reviewer FYPf, We sincerely appreciate your efforts and insightful comments to improve the draft. We respond to each of your comments one-by-one in what follows. --- **Q1. Importance of value estimation** **A1.** In Figure 7(a) of the original draft, we reported that the performance further improves when using the ground-truth value estimated with privileged information from the simulator. As you mentioned, this implies that the performance of our method can indeed depend on the quality of the learned value functions. Further quantitative evaluation of the relation between the quality of value functions and the performance of our method is indeed an interesting question, but we leave it to future work, as measuring the quality of value functions is an open problem. We would also greatly appreciate any suggestions from the reviewer on experimental designs for this analysis. -------- **Q2. Discussion on the pros and cons of maximizing value-conditional state entropy using the samples from a replay buffer** **A2.** This is an interesting question. *The benefit of maximizing the entropy using the samples from a replay buffer* is that it explicitly encourages the policy to visit unseen states that are not in the replay buffer yet. This is more aligned with a supervised RL setup, where the goal of exploration is to support the policy in finding novel states with high extrinsic reward. We adopted this buffer-based strategy in our work because our method is designed for the supervised RL setup. On the other hand, *maximizing the entropy of a policy-induced state distribution* is beneficial in case we want to learn a policy that can visit diverse states. For instance, this can be particularly useful in an unsupervised RL setup, where the goal is to train a policy that can be quickly adapted to solve the downstream task once the reward becomes available. We will include the relevant discussion in the final draft. -------- **Q3.
Analysis of the effect of $\beta$.** **A3.** Following your suggestion, we conducted additional experiments with varying $\beta$ for VCSE. As shown in Figure 2, available from the attached PDF in the general response, we find that A2C+VCSE consistently improves over A2C, in contrast to A2C+SE, which fails to significantly outperform A2C even with different $\beta$ values. -------- **Q4. Comparison with CEM** **A4.** Thanks for your suggestion to include additional discussion of the relevant work [Yang and Spaan, 2023]. As you mentioned, one could discourage exploration in the task-irrelevant state space by introducing a value constraint, similar to how CEM introduces a safety constraint. This idea could also be useful in our setup in that it allows a schedule for the value constraint, first encouraging the agent to visit all the states and then focusing exploration around high-value regions. However, we would like to note that VCSE is based on a different motivation: VCSE aims to prevent states with drastically different values from affecting exploration around each other (low-value states around high-value ones, and vice versa), not to discourage exploration in specific state regions. Moreover, we note that a mechanism similar to such value-constraint scheduling can be observed when the environment is sparsely rewarded, because VCSE acts the same as SE until it observes a sparse reward. We will include the relevant discussion in the final draft. [Yang and Spaan, 2023] Yang, Qisong, and Matthijs TJ Spaan. "CEM: Constrained Entropy Maximization for Task-Agnostic Safe Exploration." The Thirty-Seventh AAAI Conference on Artificial Intelligence. 2023. --- Rebuttal Comment 1.1: Title: After response Comment: I want to thank the authors for their detailed response, which properly addresses my concerns.
I am updating my evaluation upwards, and I really encourage the authors to include some of the provided clarifications and the additional sensitivity analysis in the final draft of the paper. As for how to evaluate the impact of approximate value functions, I think the best way to do so would be through a theoretical analysis that shows how the value function error propagates to the objective function. Of course, this might be non-trivial. In terms of experiments, it would be great to compare the performance of VCSE with a learned value function against "fixed" value approximators, which I guess would derail learning completely when the quality of the approximation is not good enough. Also, an ablation study on the timescale separation between value learning and policy learning would be interesting. --- Reply to Comment 1.1.1: Comment: Dear reviewer FYPf, Thank you for your reply! We will make sure that our clarifications and additional experiments are fully incorporated into the final draft. Moreover, we really appreciate your suggestions on additional experiments with fixed value functions and separated actor-critic learning phases. We will try our best to design a more concrete experimental setup and include the results in the final draft. Thank you very much, Authors
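As a concrete illustration of the objective discussed in this thread, here is a minimal, hypothetical sketch of a value-conditional state-entropy bonus (our illustration, not the authors' implementation): the intrinsic reward of each sample is the log of its $k$-th nearest-neighbor distance under the max-norm over the joint (state, value) space, so that only states with similar value estimates count as neighbors, in the spirit of kNN entropy estimators.

```python
import numpy as np

def vcse_bonus(states, values, k=3):
    """Hedged sketch of a value-conditional state-entropy bonus.

    For each sample we take the k-th nearest-neighbor distance under the
    max-norm over the joint (state, value) space, so a neighbor must be
    close in *both* state and value.  The log of that distance serves as
    the intrinsic reward (up to additive constants), mirroring kNN-based
    entropy estimators.
    """
    states = np.asarray(states, dtype=float)
    values = np.asarray(values, dtype=float).reshape(-1, 1)
    n = len(states)
    bonus = np.empty(n)
    for i in range(n):
        ds = np.max(np.abs(states - states[i]), axis=1)   # state max-norm
        dv = np.abs(values - values[i]).ravel()           # value distance
        joint = np.maximum(ds, dv)                        # joint max-norm
        kth = np.sort(joint)[k]   # index k skips the point itself (dist 0)
        bonus[i] = np.log(kth + 1e-12)
    return bonus
```

Because the value coordinate enters the max-norm, samples with very different value estimates cannot be close neighbors, which is the mechanism that keeps low-value regions from dominating the bonus around high-value states.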
Summary: This paper presents an exploration technique that maximizes the value-conditional state entropy: it separately estimates the state entropies conditioned on the value estimate of each state, then maximizes their average. By considering only the visited states with similar value estimates when computing the intrinsic bonus, it prevents the distribution of low-value states from affecting exploration around high-value states, and vice versa. The experiments demonstrate that the proposed alternative to the state entropy baseline significantly accelerates various reinforcement learning algorithms across a variety of tasks within the MiniGrid, DeepMind Control Suite, and Meta-World benchmarks. Strengths: (1) The experiments show that maximum value-conditional state entropy (VCSE) exploration successfully accelerates the training of RL algorithms, and the technique can serve as a reference for other RL algorithms. (2) In Section 3, the development of the entropy estimator is very clearly described. (3) In Section 5, extensive experiments demonstrate the effectiveness of the proposed algorithm. Weaknesses: (1) The writing is difficult to understand in some aspects, especially for certain formulas. (2) The motivation of this article is not fully explained. (3) To be honest, it lacks a certain degree of novelty. As is well known, the components used in this paper, such as the k-nearest-neighbor entropy estimator and the KSG conditional entropy estimator, are all previous work; the paper just utilizes some of them. (4) The main body of this paper is clear to understand, though there is room for improvement. I defer some of my issues to "Questions". Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) In line 117, what is the meaning of $z$ and $z'$? (2) In Eq. (1), what is the meaning of $d_X$? It seems unexplained.
(3) In Section 5, the selected experimental environments are relatively simple with low dimensionality. Could some complex environments with high-dimensional state and action spaces be selected for verification? (4) In Fig. 1, how are the "State Norm" and "Value Norm" computed? (5) In Fig. 1, why do you only choose the third option (kNN with k=3) instead of the others? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In Section 6, the authors discuss some limitations and future directions of this paper. At present, no potential negative societal impact of this work has been identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer pL2M, We sincerely appreciate your efforts and insightful comments to improve the draft. We respond to each of your comments one-by-one in what follows. --- **Q1. Motivation and Novelty** **A1.** In this work, we aim to improve the sample-efficiency of deep RL algorithms by introducing a novel exploration technique that addresses the problem of balancing exploration and exploitation, which is an important and challenging problem for deep RL, as highlighted by Reviewers MwTR and FYPf. Specifically, we propose a new exploration technique that builds upon the widely-used state entropy (SE) exploration by pointing out that SE suffers from an imbalance between the distributions of high-value and low-value states, and by introducing a new objective that takes into account the value estimates of states. We would also like to emphasize that our key novelty lies in introducing a new objective that maximizes the value-conditional state entropy $H(S|V)$ instead of the state entropy $H(S)$. To the best of our knowledge, our work is the first to propose this objective and demonstrate its effectiveness. Moreover, we would like to note that which estimator to use is one of our design choices and not the main focus of this work. As Reviewer MwTR mentioned in their review, any entropy/mutual-information estimator is compatible with our method. We will try to further clarify our motivation and novelty in the final draft. -------- **Q2. What is the meaning of $z$ and $z'$ in Line 117?** **A2.** $z$ and $z'$ are two arbitrary samples from the random variable $Z = (X, Y)$; this notation is used for introducing the concept of the maximum norm. Thanks for pointing this out, and we will clarify this in the final draft. -------- **Q3. The meaning of $d_{X}$ in Equation 1.** **A3.** $d_{X}$ is the dimensionality of the random variable $X$. Similarly, $d_{S}$ and $d_{Y}$ are the dimensionalities of the random variables $S$ and $Y$, respectively.
Thanks for pointing this out, and we will fix this in the final draft. -------- **Q4. Additional experiments on more complex environments** **A4.** We would like to emphasize that our experiments are conducted in environments where the observation and action spaces are high-dimensional. For instance, our main experimental results are reported on pixel-based environments where the inputs to the agents are high-dimensional multi-channel images. Moreover, we note that our method is effective on Meta-World environments, where the robot arm can freely move inside a wide workspace, so that the state-action space becomes very large. Nonetheless, to further address your concern, we provide additional experimental results on the Quadruped Walk task, which has higher-dimensional state/action spaces than the other considered tasks such as Hopper. As shown in Figure 1, available from the attached PDF in the general response, we find that our VCSE clearly outperforms SE, especially in the initial phase of training, which implies that the effectiveness of our method is consistent even on more complex tasks. We will include the relevant results in the final draft. -------- **Q5. How to compute state and value norms in Figure 1?** **A5.** To compute the state norm in Figure 1, we randomly sample states from a replay buffer and compute the Euclidean norm between states. To compute the value norm, we pass these states through the critic function to get value estimates and compute the Euclidean norm between the value estimates. We will add a more detailed explanation to the caption of Figure 1 in the final draft. -------- **Q6. In Figure 1, why do you only choose the third one (kNN with k=3) instead of the others?** **A6.** We note that $k$ is a hyperparameter in our method, and basically any $k$ can be used. We chose $k=3$ in Figure 1 for illustrative purposes. We will clarify this in the final draft.
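The norm computations described in A5 can be sketched in a few lines (a hypothetical helper; `critic` stands in for the learned value function):

```python
import numpy as np

def pairwise_norms(states, critic):
    """Sketch of the 'state norm' / 'value norm' computation described in A5:
    Euclidean distances between sampled states, and absolute differences
    (the 1-D Euclidean norm) between their critic value estimates.
    `critic` is a hypothetical stand-in for the learned value function."""
    s = np.asarray(states, dtype=float)
    v = np.asarray([critic(x) for x in s], dtype=float).reshape(-1, 1)
    state_norm = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
    value_norm = np.abs(v - v.T)
    return state_norm, value_norm
```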
--- Rebuttal Comment 1.1: Comment: In this round of feedback, the authors provided detailed explanations and modifications in response to the questions I raised. In particular, the description of the novelty and innovation enhances the overall understanding of the article, and I recommend incorporating it into the relevant parts of the paper. Regarding the expanded experiments, the authors added relevant experiments that dispel my doubts, which makes the article more competitive. Finally, I suggest that the authors further reorganize the entire article to make its structure clear. Therefore, I agree to increase my score by 1 point. --- Reply to Comment 1.1.1: Comment: Dear reviewer pL2M, Thank you for your reply! We'll make sure that our additional clarifications, experiments, and editorial comments are fully incorporated into the final draft. Thank you very much, Authors
Summary: The paper proposes an improvement over a popular intrinsic reward based on state entropy. Instead of encouraging large state entropy uniformly over all states, which may steer the policy toward failure states, the proposed intrinsic reward motivates the agent to maximize the value-conditioned state entropy. The value-conditioned state entropy is estimated with a classical kNN method. Strengths: The intrinsic reward is an important direction for improving RL exploration and sample efficiency, and how to balance intrinsic and extrinsic rewards is a challenge for intrinsic reward design. This paper proposes a novel method to achieve this goal --- conditioning the state entropy on the state value to partition the state space and focus on the state entropy over states with similar values. The paper is well written and the proposed method is strongly supported by a large range of experiments. Weaknesses: There is no significant weakness that I notice, except for the missing ablation study on the effect of $k$ for the kNN. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Besides the effect of $k$, I wonder whether the authors have tried other entropy/mutual-information estimators (for example, the one based on contrastive learning) and why they ended up selecting the kNN one. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: kNN methods for entropy estimation usually suffer in high-dimensional spaces, but the proposed method works well in environments with visual observations. So I don't find any significant limitation.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer MwTR, We sincerely appreciate your efforts and insightful comments to improve the draft. We respond to each of your comments one-by-one in what follows. --- **Q1. Analysis of the effect of $k$** **A1.** Following your suggestion, we conducted additional experiments with $k \in \{3, 5, 7\}$ and observed that the performance of our method tends to degrade as $k$ increases, as shown in Figure 3, available from the attached PDF in the general response. We hypothesize this is because a higher $k$ leads to finding less similar states, which often increases the intrinsic reward scale. We will include the relevant results in the final draft. --- **Q2. Other entropy/mutual information estimators?** **A2.** It's an interesting question. We employed a $k$-NN estimator following prior work, mainly because it is non-parametric and therefore does not incur the additional cost of training another neural-network-based estimator. But investigating the possibility of leveraging other estimators for online RL is definitely an interesting future direction, and we will include the relevant discussion in the final draft. Thank you! --- Rebuttal Comment 1.1: Comment: Thanks for your responses and they answer my questions clearly! --- Reply to Comment 1.1.1: Comment: Dear reviewer MwTR, Thank you for your reply! We're excited to hear that our response successfully answered your questions. We'll make sure that our rebuttal response is fully incorporated into the final draft. Thank you very much, Authors
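For reference, the non-parametric $k$-NN entropy estimator mentioned in A2 follows the Kozachenko-Leonenko construction. A generic max-norm sketch (not the paper's exact estimator; the digamma helper below is a simple harmonic-sum implementation valid for integer arguments) is:

```python
import math
import numpy as np

EULER_GAMMA = 0.5772156649015329

def _digamma_int(n):
    # psi(n) = -gamma + sum_{j=1}^{n-1} 1/j for integer n >= 1
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, n))

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko kNN entropy estimate in nats (max-norm version):
    H ~ psi(N) - psi(k) + d*log(2) + (d/N) * sum_i log eps_i,
    where eps_i is the distance from sample i to its k-th nearest neighbor
    and 2^d is the max-norm unit-ball volume."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    n, d = x.shape
    eps = np.empty(n)
    for i in range(n):
        dist = np.max(np.abs(x - x[i]), axis=1)
        eps[i] = np.sort(dist)[k]   # index k skips the point itself (dist 0)
    return (_digamma_int(n) - _digamma_int(k) + d * math.log(2)
            + d * np.mean(np.log(eps + 1e-300)))
```

Being non-parametric, this needs no training: a single pass over the sampled batch yields the estimate, which is the practical advantage the rebuttal points to.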
Summary: The paper investigates an exploration technique that maximizes the entropy of visited states while taking into account the expected return of those states. The goal is to explore the parts of the state space that are less visited while avoiding too much exploration of low-value states. Strengths: The paper is overall well written and presents an interesting approach to combining an estimate of the expected return with state entropy exploration. Weaknesses: My main concern is that it is a bit unclear how the presented algorithm actually enforces more exploration of the high-expected-return states compared to previous approaches. In Figure 8, I'm unsure how VCSE will behave differently from SE, given that the agent needs to visit the reward at least once to see that there is a lower bonus from continuing to explore the low-expected-return states. In addition, previous approaches that use a combination of intrinsic and extrinsic rewards should also tend to visit the high-expected-return parts of the state space more. Additional minor comments: - I'm unsure why "supervised setup" is mentioned in line 4 (abstract) and line 29. - line 91: $\gamma$ can't be 1. - Typos: line 105 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Can you clarify my main concern (see above)? If so, I would happily increase my score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately address limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer CsYB, We sincerely appreciate your efforts and insightful comments to improve the draft. We respond to each of your comments one-by-one in what follows. -------- **Q1. Unclear how the presented algorithm actually enforces more exploration of the high-expected-return states compared to previous approaches.** **A1.** We would like to clarify that VCSE does not aim to encourage more exploration only in the high-value state space, but aims to prevent a scenario where exploration is biased towards specific regions within the state space. Specifically, VCSE prevents states with drastically different value estimates from having an effect on the computation of the intrinsic reward. This prevents the distribution of low- (or high-) value states from affecting exploration around high- (or low-) value states. Thus VCSE can encourage uniform exploration within each value-conditional state space, which is the main difference from previous approaches that use a combination of intrinsic and extrinsic rewards for exploration. We will try to further clarify this in the final draft. For instance, in Figure 8 of the original draft, one can expect that VCSE and SE behave exactly the same before the agent encounters a reward from the environment. We observed that both methods indeed experience successful episodes, and for both methods the agent learns to visit more states along the optimal path towards the goal at similar steps. This is where a visible difference between SE and VCSE starts to emerge. Specifically, the high visitation count along the optimal path leads to higher SE intrinsic rewards for non-optimal states outside the path, which biases exploration towards non-optimal states. On the other hand, VCSE encourages exploration in the state regions both inside and outside the optimal path, allowing the agent to quickly learn to solve the target task. -------- **Q2.
I'm unsure why "supervised setup" is mentioned in line 4 (abstract) and line 29.** **A2.** We use the term "supervised setup" to denote a setup where the external task reward is available from the environment, following the terminology used in [Laskin et al., 2021]. [Laskin et al., 2021] Laskin, Michael, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, and Pieter Abbeel. "URLB: Unsupervised reinforcement learning benchmark." arXiv preprint arXiv:2110.15191 (2021). -------- **Q3. $\gamma \in [0, 1]$** **A3.** We followed the notation of [Sutton & Barto, 2018], where the discount factor $\gamma$ is defined to lie within $[0, 1]$. We also note that $\gamma$ can be set to 1 if the task is episodic, so that there is a terminating state [Pitis, 2019]. [Sutton & Barto, 2018] Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. MIT press, 2018. [Pitis, 2019] Pitis, Silviu. "Rethinking the discount factor in reinforcement learning: A decision theoretic approach." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, pp. 7949-7956. 2019. -------- **Q4. Typo in line 105** **A4.** Thank you for pointing this out. We'll fix this in the final draft. --- Rebuttal Comment 1.1: Title: The answer didn't clarify my doubts Comment: In Fig. 8, the subfigures (a) and (b) seem to have only one reward at the bottom right. It is still unclear to me why VCSE works better in this case. For the minor comment, $\gamma$ can't be one if the horizon is infinite (which seems to be the case here); the expected return would be unbounded. --- Reply to Comment 1.1.1: Comment: Dear Reviewer CsYB, we respond to each of your comments one-by-one in what follows. A1. Further explanation of how VCSE can work better than SE in Figure 8. First of all, we would like to clarify two things.
In Figure 8, RL agents should still perform additional exploration to exploit the task reward after encountering the first reward, because the environment is partially observable and thus there could be states with higher value estimates. Moreover, both SE and VCSE obtain rewards at similar timesteps, although this is not clearly visible in the heatmap of Figure 8. Now we provide a more detailed explanation of how VCSE can work better than SE in this setup. Because the RL agent tries to exploit the task reward, it initially begins to visit more states around the high-value states. However, because SE aims to encourage uniform coverage by visiting low-value and high-value states equally, the intrinsic reward for low-value states begins to increase. The agent then starts to explore excessively around low-value states instead of further exploring around the rewarding state at the bottom right of the map. This makes it difficult for the A2C+SE agent to quickly learn to solve the target task. Moreover, we showed in Figure 4 that this issue is not easily addressed by adjusting the scale of the intrinsic rewards. On the other hand, VCSE avoids this issue because it does not consider states with different values when calculating intrinsic rewards. This means that VCSE can still encourage exploration around high-value states, which allows the agent to visit states with higher values and quickly learn to solve the target task. We hope that this explanation answers your question. Please let us know if anything is still unclear, and we will try to further clarify this in the final draft. — A2. Discount factor Thank you for pointing this out. We will incorporate your comment in the final draft. Title: Further Clarification by Authors
Rebuttal 1: Rebuttal: Dear reviewers, We sincerely appreciate your efforts and insightful comments. Following your suggestions, we provide a one-page pdf that contains (i) the analysis on the effect of $\beta$ and $k$ and (ii) additional results on a high-dimensional Quadruped environment. We will incorporate them into the final draft. Sincerely, Authors Pdf: /pdf/370ed19a3cdf74c7f70218687b8a92437bf64f92.pdf
NeurIPS_2023_submissions_huggingface
2023
Towards Data-Algorithm Dependent Generalization: a Case Study on Overparameterized Linear Regression
Accept (poster)
Summary: This paper suggests a notion of compatibility, an algorithm-dependent and data-dependent quantity that bounds the excess risk of overparameterized linear regression models. Using this notion, the authors identify three regimes of the covariance matrix and characterize the compatibility region (that is, the range of training steps for which the model trained with GD still converges in terms of excess risk). The authors show the connection between compatibility and benign overfitting and also compare with stability bounds. Several examples are provided to demonstrate how the compatibility region depends on the covariance structure. Strengths: I find the paper to be well-written and easy to follow. The overall topic of generalization of overparameterized models (even in the linear case) is important. The authors make a considerable effort to introduce existing bounds. Weaknesses: 1. I find the study of overparameterized linear models *together with* early stopping less motivating. In particular, early stopping for linear regression is equivalent to $\ell_2$ regularization, in which case we can already apply a uniform convergence bound for the generalization error. Is the motivation then mainly to obtain a better bound? If so, I would suggest the authors move the comparison with the stability bound into the main text. This is in contrast to the overfitting case, since if we are overfitting, there is no uniform convergence bound at all. 2. One relevant work is missing: [1] (and its related citations) discusses cases where we have different kinds of overfitting. In particular, their Theorem 3.1(b) corresponds to your Example 4.1. I would love to see how your results relate to theirs. [1] Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and meaningful comments, which will greatly help us revise our manuscript. The reviewer's major concern is whether our results are covered by the known relationship between early stopping and ridge regression. Below, we first explain intuitively why they are not, and defer the technical details. Firstly, in this relationship the regularization parameter of ridge regression is set to $1/t$, where $t$ denotes the early-stopping time [1]. In linear regression, such $t$ usually relates to the sample size $n$ (e.g., $t = \sqrt{n}$); with such small regularization, the parameter norm may still be large, so norm-based generalization bounds may fail. Secondly, the information bottleneck principle tells us that fewer steps may provide better results; by passing through the ridge-regression correspondence, this information is partially lost, so our direct bound can be tighter. We calculate the generalization performance of ridge regression under the regularization $1/t$ and show that it cannot cover our results. We will add the detailed discussion in the revision. Below we do our best to address the reviewer's concerns in detail. >Q1: Early stopping for linear regression is equivalent to $\ell_2$ regularization. In this case, we can already apply a uniform convergence bound for the generalization error. A1: We thank the reviewer for the constructive comment. Indeed, there is a close relationship between early stopping and ridge regression. For example, [1] proves that the early-stopping excess risk at time $t$ can be upper bounded by the ridge regression excess risk with **regularization coefficient $1/t$**. However, ***our results cannot be covered, since the results in [1] do not apply for large time $t$ (leading to small regularization $\lambda = 1/t$)***. For example, consider the case where the eigenvalues decay as $\lambda_i = \frac{1}{i^{1.5}}$.
If we take $t=\Theta\left(\sqrt{n}\right)$ and $\lambda=\Theta\left(\frac{1}{\sqrt{n}}\right)$, it can be shown that a norm-based generalization bound for the ridge regression solution gives an excess-risk bound of $\Omega(1)$, which is vacuous for the early-stopping solution. As a comparison, our results give a consistent generalization bound for this case. Furthermore, this approach cannot cover our bound even if we use a more fine-grained analysis rather than uniform convergence: following Theorem 1 in [2], we can calculate that the variance term is $\Omega(1)$ when $t=\Omega\left(\sqrt{n}\right)$, and the bound is vacuous for the above example. Due to the space limit, we are unable to post the proof here; we will add the calculation details as soon as possible during the discussion period. >Q2: One relevant work is missing... their theorem 3.1 (b) corresponds to your example 4.1. I would love to see how your results are related to theirs. A2: We thank the reviewer for the recommendation. We will add a discussion and cite the related papers [3], [4], [5], [6], [7], [8], [9], [10], [11] in the next revision. As pointed out by the reviewer, Theorem 3.1(b) in [3] shows that for feature covariance $\lambda_i=1/i^\alpha$ with $\alpha>1$, the excess risk converges to $(\alpha-1)\sigma^2>0$ and thus falls into the **tempered** taxonomy of [3]. As a comparison, we prove that with early stopping the model can achieve $o(1)$ excess risk for these data distributions, and thus falls into the **benign** taxonomy of [3]. In this sense, our results are consistent with and extend the result in [3]. Again, we would like to express our great gratitude to the reviewer for their thoughtful comments. We are eager to provide any additional clarification that may be needed to help the evaluation. [1] Ali A, Kolter J Z, Tibshirani R J.
A continuous-time view of early stopping for least squares regression[C]//The 22nd international conference on artificial intelligence and statistics. PMLR, 2019: 1370-1378. [2] Tsigler A, Bartlett P L. Benign overfitting in ridge regression[J]. J. Mach. Learn. Res., 2023, 24: 123:1-123:76. [3] Mallinar N, Simon J B, Abedsoltan A, et al. Benign, tempered, or catastrophic: A taxonomy of overfitting[J]. arXiv preprint arXiv:2207.06569, 2022. [4] Carrell A, Mallinar N, Lucas J, et al. The calibration generalization gap[J]. arXiv preprint arXiv:2210.01964, 2022. [5] Simon J B, Dickens M, Karkada D, et al. The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks[J]. arXiv preprint arXiv:2110.03922, 2021. [6] Liu C, Abedsoltan A, Belkin M. On Emergence of Clean-Priority Learning in Early Stopped Neural Networks[J]. arXiv preprint arXiv:2306.02533, 2023. [7] Emami M, Sahraee-Ardakan M, Pandit P, et al. Generalization error of generalized linear models in high dimensions[C]//International Conference on Machine Learning. PMLR, 2020: 2892-2901. [8] Beaglehole D, Belkin M, Pandit P. Kernel Ridgeless Regression is Inconsistent for Low Dimensions[J]. arXiv preprint arXiv:2205.13525, 2022. [9] Sahraee-Ardakan M, Emami M, Pandit P, et al. Kernel methods and multi-layer perceptrons learn linear models in high dimensions[J]. arXiv preprint arXiv:2201.08082, 2022. [10] Nakkiran P, Kaplun G, Bansal Y, et al. Deep double descent: Where bigger models and more data hurt[J]. Journal of Statistical Mechanics: Theory and Experiment, 2021, 2021(12): 124003. [11] Belkin M. Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation[J]. Acta Numerica, 2021, 30: 203-248. [12] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. Adaptive computation and machine learning. MIT Press, 2012. 
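The early-stopping/ridge correspondence from [1] invoked in A1 can be sanity-checked numerically: gradient flow on least squares applies the spectral filter $(1-e^{-ts})/s$ to each eigen-component, while ridge regression with $\lambda = 1/t$ applies $1/(s + 1/t)$, and the two filters agree up to a modest constant factor over the whole spectrum. The sketch below (all data-generating choices are our own illustrative assumptions, not from the paper) checks that the two solutions are closely aligned:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 30
scales = np.arange(1, d + 1, dtype=float) ** -0.75   # decaying feature scales
X = rng.normal(size=(n, d)) * scales
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

A = X.T @ X / n            # empirical covariance (full rank since n > d)
b = X.T @ y / n
w, V = np.linalg.eigh(A)   # eigenvalues w > 0, eigenvectors V

def gradient_flow(t):
    """theta(t) for gradient flow on least squares: filter (1-exp(-t s))/s."""
    g = (1.0 - np.exp(-t * w)) / w
    return V @ (g * (V.T @ b))

def ridge(lam):
    """Ridge solution: filter 1/(s + lam) applied per eigen-component."""
    return V @ ((V.T @ b) / (w + lam))

t = 50.0
a, c = gradient_flow(t), ridge(1.0 / t)
cosine = a @ c / (np.linalg.norm(a) * np.linalg.norm(c))
```

Since the two filters' ratio stays within a bounded constant of 1 for all eigenvalues, the resulting parameter vectors are nearly parallel; this is the coarse correspondence the rebuttal argues becomes lossy for large $t$.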
--- Rebuttal Comment 1.1: Title: Calculations for A1 Comment: ## Uniform Convergence on Ridge Regression We next show that ridge regression with regularization $1/t$ and uniform convergence cannot cover our results. We consider a case with data covariance $\lambda_i=\frac{1}{i^{1.5}}$, and show that ridge regression would return a bound with compatibility region shorter than $(0, \sqrt{n})$, which cannot cover our results with compatibility region $(0,n)$. The uniform convergence bound scales like $O\left(\frac{||\theta_\lambda||}{\sqrt{n}}\right)$ (see, e.g., Theorem 5.10 in [12]), where $\theta_\lambda$ denotes the ERM solution of ridge regression with regularization $\lambda$. We lower bound the norm of $\theta_\lambda$ as follows. The key insight is that the noise contributes substantially to the parameter norm. $$\mathbb{E}\left[||\theta_\lambda||^2\right]=\mathbb{E}\left[||X^\top (XX^\top+\lambda I)^{-1}Y||^2\right]=\mathbb{E}\left[||X^\top (XX^\top+\lambda I)^{-1}(X\theta^*+\varepsilon)||^2\right] =\mathbb{E}\left[||X^\top (XX^\top+\lambda I)^{-1}(X\theta^*)||^2\right]+\mathbb{E}\left[||X^\top (XX^\top+\lambda I)^{-1}\varepsilon||^2\right]\ge \mathbb{E}\left[||X^\top (XX^\top+\lambda I)^{-1}\varepsilon||^2\right] =\mathbb{E}\text{Tr}\left[X^\top (XX^\top+\lambda I)^{-1}\varepsilon\varepsilon^\top (XX^\top+\lambda I)^{-1}X\right]\ge \sigma_\varepsilon^2 \mathbb{E}\text{Tr}\left[XX^\top (XX^\top+\lambda I)^{-2} \right] =\sigma_\varepsilon^2 \sum_{i=1}^n \frac{\mu_i}{(\mu_i+\lambda)^2},$$ where $\mu_i$ denote the eigenvalues of the Gram matrix $XX^\top$, and we assume the noise vector is conditionally unbiased and has a lower bounded variance, as in Theorem B.2. We next assume that $t = 1/\lambda > \sqrt{n}$. Then, according to Lemmas A.1 and B.2 in our paper, it holds that $\mu_n=\Theta\left(\frac{1}{\sqrt{n}}\right)=\Omega(\lambda)$ for $\lambda_i=i^{-\frac{3}{2}}$, where $k_0$ denotes the effective rank in the paper. 
Therefore, it holds that $$\sigma_\varepsilon^2 \sum_{i=1}^n \frac{\mu_i}{(\mu_i+\lambda)^2}=\Omega\left(\sigma_\varepsilon^2 \sum_{i=1}^n \frac{1}{\mu_i}\right) = \Omega(\sqrt{n}),$$ which leads to a vacuous generalization bound under uniform convergence. Therefore, the related compatibility region must be shorter than $(0, \sqrt{n})$. However, in this case, our results show a compatibility region $(0,n)$, which is better. ## Benign Overfitting in Ridge Regression We additionally show that even if we apply the benign overfitting analysis to ridge regression, the returned compatibility region still cannot cover our results. To apply Theorem 1 in [2], we first calculate the effective rank $k^*$. Note that by plugging in $\lambda_i=\frac{1}{i^{1.5}}$ and $\lambda = 1/\sqrt{n}$, $$ r_k=\frac{1}{\lambda_{k+1}}\left(\lambda+\sum_{i>k} \lambda_i\right)=\Theta\left(\frac{k^{\frac{3}{2}}}{\sqrt{n}}+k\right), $$ which increases monotonically with $r_n = \Theta(n)$, and therefore $$ k^*=\min \lbrace\kappa: r_\kappa>b n\rbrace=\Theta(n). $$ According to Lemma 11 in [2], we have $\bar{k}=\Theta(n)$, $k=\min\lbrace k^*,\bar{k}\rbrace=\Theta(n)$, and the first term in the variance part $\frac{k}{n}=\Omega(1)$, which makes the bound vacuous for this setting.
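The effective-rank calculation above can be checked numerically. The sketch below is our own sanity check, not part of the paper; the finite tail truncation $p = 50n$ and the constant $b = 1$ are illustrative assumptions. It computes $r_k$ for $\lambda_i = i^{-1.5}$ with $\lambda = 1/\sqrt{n}$ and confirms that $k^* = \min\lbrace\kappa: r_\kappa > bn\rbrace$ scales linearly in $n$:

```python
import math

def k_star(n, alpha=1.5, b=1.0):
    """Effective rank k* = min{k : r_k > b*n} for lambda_i = i^(-alpha)."""
    p = 50 * n                    # finite truncation of the spectrum (assumption)
    lam = 1.0 / math.sqrt(n)      # ridge level lambda = 1/sqrt(n)
    lams = [i ** (-alpha) for i in range(1, p + 1)]
    # tail[k] = sum_{i > k} lambda_i via suffix sums (eigenvalues are 1-indexed)
    tail = [0.0] * (p + 1)
    for i in range(p - 1, -1, -1):
        tail[i] = tail[i + 1] + lams[i]
    # r_k = (lambda + sum_{i > k} lambda_i) / lambda_{k+1}
    for k in range(p):
        if (lam + tail[k]) / lams[k] > b * n:
            return k
    return p

# Since r_k = Theta(k^{3/2}/sqrt(n) + k), k* should grow linearly in n,
# i.e. the ratios k*/n should stay roughly constant as n doubles.
ratios = [k_star(n) / n for n in (200, 400, 800)]
```

Running this shows the ratio $k^*/n$ stabilizing around a constant, matching the $k^* = \Theta(n)$ claim that makes the variance term $k/n = \Omega(1)$.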
Summary: This paper suggests a new framework for evaluating the generalization error, called compatibility. Compatibility takes into account both the data and the algorithm, and says the two are compatible if there is an early stopping region, as $n \to \infty$, that has zero excess risk. The paper then considers the linear regression framework and provides spectral conditions under which full-batch constant-step-size gradient descent is compatible. Strengths: The main strength of the paper is the generality of the data distributions considered. Specifically, most prior theory work assumes we have isotropic data or some fixed transform of isotropic data. However, this paper considers general anisotropic data. In this setup, they provide mild conditions under which we see that early stopping is beneficial for generalization. The paper also provides clear evidence of the benefits of early stopping, showing that the min-norm solution is not necessarily the optimal solution for generalization. Further, the paper shows that this is true even in the large data limit when the min-norm solution is benign. Weaknesses: There are two main weaknesses (one major, one minor). 1) The paper is missing a comparison to Madhu S. Advani, Andrew M. Saxe, and Haim Sompolinsky. High-dimensional Dynamics of Generalization Error in Neural Networks. Neural Networks, 2020. While the data assumptions in the above paper are more restrictive (isotropic data), that paper exactly characterizes the risk curve as a function of time and determines the optimal stopping time. Hence a proper discussion of it is crucial. 2) While the paper is well written, it sweeps technical details under the rug that would be important to know. For example, the constants $c_0$ and $c_1$ are opaque and play an important role. See my questions for more instances. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Assumption 1 subpoint 2, should it be $\lambda_p > 0$ instead of $\lambda_1 > 0$? 
Are the Cs on lines 169 and 173 supposed to be the same? Why is $\boldsymbol{Y}$ called the noise vector? In Equations (4), (5), how do we know such an $\ell$ exists? What is $c$ in the statement of Theorem 5.1 in $k_0 \le n/c$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for their valuable feedback and insightful comments. We apologize for not making everything clear and will revise our manuscript according to the reviewer's suggestions. Below we do our best to address the reviewer's concerns. >Q1: The paper is missing the comparison to [1]. A1: We thank the reviewer for the recommendation. We will make sure to add the discussion of [1] as well as its related papers [2], [3] in the revision. As the reviewer points out, the major difference between our paper and [1] lies in the different assumptions on the data. [1] assumes that the input data, data noise, and teacher model weights all follow i.i.d. isotropic Gaussian distributions. In particular, the data covariance in this setting is an identity matrix. In comparison, our results only require bounded ground truth weights and subGaussian data and noise, which is milder. >Q2: In Assumption 1 subpoint 2, should it be $\lambda_p>0$ instead of $\lambda_1>0$? A2: We apologize for the confusion. Here we indeed assume that $\lambda_1 > 0$ to rule out the all-zero input case. This holds trivially in practice, but we still state it explicitly to make the paper complete. Our results hold even when $\lambda_p=0$, which is a degenerate case and can be reduced to another linear regression problem in lower dimensions. >Q3: Are the Cs on lines 169 and 173 supposed to be the same? A3: They can be treated as the same absolute constant that is large enough. The use of a unified $C$ is for notational simplicity. For example, if we have $\sum_{i>0}\lambda_i< C_1, ||\theta^*||<C_2$, we can take $C$ as the maximum of $C_1$ and $C_2$. We use such tricks because we want to focus on the rates in the dimension and sample size, as most related papers do. >Q4: Why is $Y$ called the noise vector? A4: We apologize for the typo. We will change it to the response vector in the revision. 
>Q5: In Equations (4), (5), how do we know such an $\ell$ exists? A5: Thanks for the good catch. We take $\ell=\infty$ if a finite $\ell$ does not exist. This would lead to a vacuous generalization bound. We will make it clearer in the next version. >Q6: What is $c$ in the statement of Theorem 5.1 in $k_0 \le n/c$? A6: $c$ is a large enough constant independent of the time $t$, the dimension $p$, and the sample size $n$. Once again, we thank the reviewer for the time and feedback, which will undoubtedly help us improve the quality of our work. We remain committed to improving the clarity and rigor of our paper and welcome any further feedback. [1] Advani M S, Saxe A M, Sompolinsky H. High-dimensional dynamics of generalization error in neural networks[J]. Neural Networks, 2020, 132: 428-446. [2] Saxe A M, Bansal Y, Dapello J, et al. On the information bottleneck theory of deep learning[J]. Journal of Statistical Mechanics: Theory and Experiment, 2019, 2019(12): 124020. [3] Goldt S, Advani M, Saxe A M, et al. Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup[J]. Advances in neural information processing systems, 2019, 32. --- Rebuttal Comment 1.1: Title: Still confused about the c's Comment: Thank you for the clarifications, but I am still confused about the c's. The Cs on lines 169 and 173 now make sense to me. I am still unsure about the ones in Equations (4) and (5) and the one in Theorem 5.1. Clarification would be appreciated. --- Reply to Comment 1.1.1: Title: Further Clarifications on the Constants Comment: We apologize for any lack of clarity and would like to offer further clarifications below. The constants $c$, $c_0$, $c_1$ mainly serve to make the statements more rigorous. One can replace the constants with $\mathcal{O}(\cdot)$ notation to make this clearer. 
For example, the statements regarding $c$ in Theorem 5.1 are equivalent to $k_0 =\mathcal{O}(n), \log \frac{1}{\delta} =\mathcal{O}(n), \lambda =\mathcal{O} \left(\frac{1}{\sum_{i>0} \lambda_i}\right)$. Here, the first condition constrains the regularity of the problem class, the second condition lower bounds the error probability, and the third condition upper bounds the learning rate; all are standard in the related literature. The constants $c_0,c_1$ can be interpreted similarly. Constants in our paper do not depend on the input dimension ($p$), the sample size ($n$), or the input covariance ($\Sigma$). These constants may only be associated with other terms assumed to be of order $\mathcal{O}(1)$ in the paper, such as the subGaussian parameter ($\sigma_x$). This aims to simplify our results and shift the focus towards generalization bounds concerning input dimensions and sample sizes. This technique is a common practice in the related literature [1, 2]. Therefore, one can select a suitable set of constants (without dependence on input dimensions or sample sizes) for the main theorems to hold. Different choices of constants may yield different generalization bounds, but these bounds all share the same rate with regard to the sample size $n$, as established in Theorem 5.1 using the $\lesssim$ notation. We again express our gratitude to the reviewer for the feedback. We will add a clear explanation of the constants in the next revision. We are eager to engage in any further discussions to help the evaluation. Thank you! [1] Peter L. Bartlett, Philip M. Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. CoRR, abs/1906.11300, 2019. [2] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, and Sham M. Kakade. Benign overfitting of constant-stepsize SGD for linear regression.
Summary: This paper considers the benign overfitting phenomenon in overparameterized linear regression problems under early-stopping-type gradient descent. Specifically, this paper gives a time-variant bound for the excess risk during gradient descent training. By comparing the optimal time-variant bound with the previous bounds for the minimum-norm interpolant, this paper finds data distribution examples for which optimal early stopping satisfies benign overfitting (compatibility) but the minimum-norm interpolant does not. Among these examples, some outperform the previous early-stopping-type bounds for overparameterized linear regression problems. Strengths: * The organization and presentation of this paper are smooth and clear, and it provides a better understanding of the benign overfitting phenomenon for early-stopping-type gradient descent. * The theory in this paper is solid. Although it mainly follows from the previous two works *Bartlett et al.* and *Zou et al.*, the proposed notion of 'effective dimensions' for the feature covariance matrix in this paper provides an alternative way to analyze the variance term in the bias–variance decomposition. Weaknesses: There is no major weakness for me. --- A minor issue: For the examples and the numerical experiments in this paper, it would be better if the authors could provide the patterns showing the interplay between the bias term $B(\theta_t)$ and the variance term $V(\theta_t)$ during training. Especially when comparing with the bounds in previous works, it would be more intuitive to see why compatibility happens but benign overfitting does not, if the evolutions of $B(\theta_t)$ and $V(\theta_t)$ are shown separately. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * In Table 2, for the distributions $\lambda_i = \frac{1}{i \log^2(i+1)}$ and $\lambda_i = \frac{1}{i \log^3(i+1)}$, both cases are benign overfitting based on *Bartlett et al.*, but the min-norm excess risk seems to be very large; is this caused by the bias term or the variance term? * I am curious what the generalization bound for early stopping would look like following the notation in *Theorem 4 [Bartlett et al.]*, in terms of $r_0(\Sigma), k^*, R_{k^*}(\Sigma)$. I am trying to understand the difference between methods using the proposed effective dimension in this paper and using the effective ranks in *Bartlett et al.*, because understanding the differences may help to motivate other more refined controls for the generalization analysis. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's supportive and insightful comments. We will make sure to add the related discussion in the revision. Below, we do our best to address the reviewer's questions adequately. >Q1: For the examples and the numerical experiments in this paper, it would be better if the authors could provide the patterns to show the interplay between the bias term $B(\theta_t)$ and the variance term $V(\theta_t)$ during training. A1: Thanks for the constructive suggestion. In the experiments, the bias term $B(\theta_t)$ decays rapidly, and the model achieves small excess risk in the initial phase of training (as in our Figure 1). Meanwhile, the variance term $V(\theta_t)$ increases steadily during training, and it takes a long time for the variance term, as well as the excess risk, to converge to a plateau. We also provide the following table to validate the above pattern theoretically. The table shows the different bounds on bias and variance in different methods. We will add the discussion in the next revision.

| Reference | Setting | Bias | Variance |
|----------|----------|----------|----------|
| This paper | gradient descent | $\frac{1}{t}+\frac{1}{\sqrt{n}}$ | $\frac{k_1}{n}+\frac{t^2}{n^2}$ |
| [1] | overfitting solution | $\frac{k_0}{n}$ | $\frac{n}{R_{k_0}(\boldsymbol{\Sigma})}$ |
| [2] | one-pass stochastic gradient descent | $\frac{k_1}{n}$ | $\frac{n \sum_{i>k_1} \lambda_i^2}{\left(\sum_{i>0} \lambda_i\right)^2}$ |

>Q2: In Table 2, for the distributions $\lambda_i=\frac{1}{i\log^2(i+1)}$ and $\lambda_i=\frac{1}{i\log^3(i+1)}$, both cases are benign overfitting based on Bartlett et al., but the min-norm excess risk seems to be very large; is this caused by the bias term or the variance term? A2: We thank the reviewer for the insightful question! Benign overfitting means that the excess risk goes to zero as the sample size $n$ goes to infinity. 
In practical cases, the excess risk may be numerically large for a fixed sample size $n$ (possibly due to a slow convergence rate with respect to $n$). As for the cause of the large excess risk, we believe that it comes from the variance term, since the bias term is usually easy to learn in linear regression. We will add the discussion in the next version. Although the exact numbers alone cannot imply benign overfitting, we can still compare the early-stopping regime and the interpolation regime, which is what the experiments in this paper aim to show. >Q3: I am trying to understand the difference between methods using the proposed effective dimension in this paper and using the effective ranks in [1]. A3: We thank the reviewer for the insightful question. The differences are two-fold: (a) our proposed effective dimension is time-dependent ($k_2$), while the effective ranks in [1] are time-independent; and (b) due to the time effect, we can split the data covariance with a more fine-grained analysis. Therefore, the term $k_0/n$ proposed in [1] can be improved to $k_1/n$ in our paper (where $k_1$ is provably smaller than $k_0$). We will make sure to add the comparison in the revision. Once again, we appreciate the reviewer's feedback and hope that our response addresses the concerns. We remain committed to improving the clarity and rigor of our paper and welcome any further feedback. [1] Peter L. Bartlett, Philip M. Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. CoRR, abs/1906.11300, 2019. [2] Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, and Sham M. Kakade. Benign overfitting of constant-stepsize SGD for linear regression. --- Rebuttal Comment 1.1: Title: Rebuttal update Comment: I thank the authors for the rebuttal. Having read the rebuttal and other reviews I decided to keep my score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We thank the reviewer for the supportive feedback. 
We will remain eager to engage in any further clarifications if needed.
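The bias/variance pattern described in the rebuttal above (the bias term decays rapidly while the variance term grows steadily under gradient descent) can be illustrated with a small simulation. This is our own toy sketch under assumed settings ($\lambda_i = i^{-1.5}$, full-batch GD from zero initialization), not the paper's experiment code; it exploits the fact that GD is linear in the labels, so the bias and variance components can be probed with separate label vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma, eta, T = 100, 400, 0.5, 0.05, 2000
lam = np.arange(1, p + 1) ** -1.5                # assumed population spectrum
X = rng.standard_normal((n, p)) * np.sqrt(lam)   # rows ~ N(0, diag(lam))
theta_star = np.zeros(p)
theta_star[:10] = 1.0 / np.sqrt(10)              # bounded ground truth

def gd_excess_risk(y, target):
    """Full-batch GD from zero init; track excess risk ||theta_t - target||_Sigma^2."""
    theta = np.zeros(p)
    risks = []
    for _ in range(T):
        theta -= eta / n * X.T @ (X @ theta - y)
        d = theta - target
        risks.append(float((lam * d) @ d))
    return risks

bias = gd_excess_risk(X @ theta_star, theta_star)  # noiseless labels: bias term
noise = sigma * rng.standard_normal(n)
var = gd_excess_risk(noise, np.zeros(p))           # pure-noise labels: variance term
```

In this toy run, `bias` drops quickly toward zero while `var` keeps climbing with $t$, reproducing qualitatively the trade-off behind the early stopping region.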
Summary: The authors present a new mathematical framework for characterizing the generalization of over-parameterized machine learning models. The new framework provides improved bounds over prior work, and improves on prior work by combining both data and algorithm information. To summarize their contributions: The paper formalizes a notion of compatibility as a minimal condition for generalization based on the interaction between data and algorithm. They derive sufficient conditions for compatibility for over-parameterized linear regression with gradient descent, then derive time-variant generalization bounds for over-parameterized linear regression, and show empirical results that verify the motivation of compatibility and demonstrate the benefits of early stopping. Caveat: This paper falls outside my area of expertise to the level that in my available time I cannot vouch for the accuracy of the theorems provided in the paper. Strengths: Theoretical proofs are given, along with bounds comparisons and empirical results. The paper seems well-written and rigorous in its assessment. The problem is well motivated and extremely well-referenced in its citations and descriptions of the current work in this area. The ability and degree to which over-parameterized models can generalize is one of the main mysteries of deep learning, and where traditional generalization theory breaks down, so the paper is extremely relevant. Weaknesses: This paper requires substantial knowledge of generalization theory to understand and is not accessible to a broader audience. For example, the introduction assumes that terms like excess risk are understood by the reader before they are later defined. While the theory is claimed to be general, thorough treatment is only given for linear regression, and derivations of sufficiency conditions and bounds for other problems are left as future work. 
While deep learning was given as a primary motivation to understand generalization for over-parameterized models, application to these models was not addressed in the paper. The experimental and conclusions sections were very minimal. I understand that the theory was the primary focus of the paper, but would have liked to see more practical demonstration and discussion of implications for the theoretical results that can be drawn from your investigations. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Are there any roadblocks that would make application of this difficult for other models, such as non-linear regression problems including deep neural networks? Are there larger or broader implications of data-algorithm compatibility and your results that you can discuss? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Societal impact does not seem relevant or needed to address for this paper. The paper acknowledges a limited scope to linear cases, but does not go into any additional weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for the valuable feedback and supportive comments. The reviewer's major concern comes from the linear example. Indeed, we only derive a linear example to demonstrate the effectiveness of compatibility. However, the underlying idea is general and can bring interesting insights to the generalization community. Below we do our best to address the reviewer's concerns. >Q1: Are there any roadblocks that would make application of this difficult for other models, such as non-linear regression problems including deep neural networks? Are there larger or broader implications of data-algorithm compatibility and your results that you can discuss? A1: We thank the reviewer for the insightful and constructive comments. As the reviewer points out, the proposed data-algorithm compatibility is a general concept which can be deployed in deep learning, while the given example mainly focuses on overparameterized linear regression. Studying a linear example is widely adopted in generalization analysis to understand deep learning, since (a) existing theories on the neural tangent kernel [1,2] and the random feature model [3,4] (both theoretical approximations of neural networks) draw a rigorous connection between them, and (b) their generalization performance exhibits inherent relationships and common patterns, as indicated by our experiments. Due to technical limitations, we cannot directly generalize to neural networks now, but we believe that the concepts in this paper still provide some interesting insights for the generalization community. >Q2: This paper requires substantial knowledge of generalization theory to understand and is not accessible to a broader audience. [...] 
I understand that the theory was the primary focus of the paper, but would have liked to see more practical demonstration and discussion of implications for the theoretical results that can be drawn from your investigations. A2: We thank the reviewer for the suggestions! We will reorganize the arguments and revise the paper accordingly to make it easier to understand. As an example, we will provide a definition of excess risk in the introduction, and add the discussion on the broader impact. Thank you! Once again, we express our gratitude to the reviewer for the valuable feedback, which will undoubtedly contribute to enhancing the quality of our work. We remain committed to improving the clarity and rigor of our paper and welcome any further feedback. [1] Jacot A, Gabriel F, Hongler C. Neural tangent kernel: Convergence and generalization in neural networks[J]. Advances in neural information processing systems, 2018, 31. [2] Arora S, Du S S, Hu W, et al. On exact computation with an infinitely wide neural net[J]. Advances in neural information processing systems, 2019, 32. [3] Rudi A, Rosasco L. Generalization properties of learning with random features[J]. Advances in neural information processing systems, 2017, 30. [4] Mei S, Montanari A. The generalization error of random features regression: Precise asymptotics and the double descent curve[J]. Communications on Pure and Applied Mathematics, 2022, 75(4): 667-766. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I have read the rebuttal presented by the authors, and appreciate their clarifications. I look forward to seeing the reorganization and improvements in clarity in the next revision. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We again thank the reviewer for the precious time and valuable feedback, and make sure to revise the current manuscript to improve its clarity. We remain eager to engage in any further clarifications to help the evaluation.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Generalizable One-shot 3D Neural Head Avatar
Accept (poster)
Summary: The authors propose a method for one-shot facial animation that preserves a high level of detail from the input image. To handle facial deformations, they disentangle expression and appearance and introduce a neural point cloud renderer for that. To train the model, the authors use a popular set of loss functions and data, and incorporate synthetic data to improve the underlying geometry. Qualitative and quantitative experiments show the advantage of the architectural design and superiority over existing methods. Strengths: - The idea to decompose geometry and appearance in a canonical face is an interesting combination of recent successes in neural rendering. - The demonstrated method outputs clearly show the large number of preserved details and the correct disentanglement of identity and expression. - The results contain an enormous level of detail compared to the demonstrated baselines. - The authors carefully describe how each branch contributes to the resulting quality. Weaknesses: The comparison does not include latent avatar models [1, 2]. Adding them would improve the overall experiments section, since both methods produce high-resolution results for reenactment. - [1] One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing - [2] Megaportraits: One-shot megapixel neural head avatars. The introduced neural point cloud cannot cover large view changes. Some statements are overconfident with respect to the evaluation (mostly regarding the consistent usage of 'SOTA'). Lack of an ethics section in the main text, which is important for human avatars. Since the emotions in the video are too unrealistic, I want to emphasise that comparison with the methods above is necessary to get fairer conclusions from the paper. The method looks bound by the pre-aligned geometry and limited in its ability to express human-level poses. Some minor details: Writing quality can be improved. Lack of a limitations section. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: The neural point cloud introduced in line 182 is equivalent to a dense feature grid, isn't it? This procedure is also similar to latent avatars like [1, 2], but with orthographic projection. Could you elaborate on the reasons to include synthetic images and how they affect the final quality? For instance, you mentioned that CelebV-HQ is static (which is debatable); does it help somehow? Could you compare expression transfer? As I see in the first video results, there is a clear gap in the emotional level and naturalness during the reenactment. What is the time needed for inference? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The method by itself has limited geometric ability to describe human heads. Based on the presented images, there is no information about the upper body. There is no option to render 360-degree view paths. Lack of realistic emotions. I doubt that the method can be easily used in real-time applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comparison to latent 2D avatar methods.** - Please see A1 in the Global Response above. Note that we cannot compare to Megaportraits since its code is not publicly available. For "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing", we used a public third-party implementation because the official code is not released (we have sent the code link of this implementation to AC since we cannot add external links in the rebuttal text). **Point clouds cannot cover huge view changes.** - The neural point clouds indeed only cover the visible parts in the input portrait, but the canonical branch is developed to provide cues of missing parts and guarantee a realistic full reconstruction. This is exactly our main motivation for disentangling coarse geometry and intricate details into two tri-planes. The former learns to map a portrait image in any pose to a canonical coarse reconstruction implicitly in a holistic manner, while the latter explicitly lifts the partial observation from the 2D space using camera pose and rendered depth. Through training, the two branches learn to be compatible with each other and produce reasonable full reconstructions for inputs captured from various views. As shown in Fig.1 (right) in the rebuttal PDF, even for a source image captured from an extreme profile view, our method produces a reasonable reconstruction, while 2D baselines present unrealistic warping artifacts. We do think that further improvement is possible for the neural point cloud part. For instance, one way is to inpaint the neural point cloud by utilizing the symmetry prior of human portraits; we leave this exploration to future work. **Synthetic images.** - Why do we need synthetic data? We first analyze the variation between source and target poses in the CelebV-HQ and the synthetic datasets. We randomly sample 1000 training pairs and compute the pose distance between the source and target image in each pair. 
The average pose difference in CelebV-HQ and the synthetic dataset is 0.0482 and 0.0985, respectively. This shows that sampled pairs in CelebV-HQ have less pose variation. Instead, the source and target images in the synthetic data have larger pose variation, which forces the model to hallucinate realistic missing parts during training. We further carry out an ablation study that trains a model without using synthetic data. As shown in the last column in Fig.4 in the rebuttal PDF, the model learned without synthetic data shows artifacts in the occluded region (e.g., the left side of the nose). - What alternative generative models could be used to produce synthetic data and how do they affect our model? We kindly point to A2 in the Global Response above, where we show that by replacing EG3D with Next3D [2] in the training data synthesis process, our model can be further improved. - We would like to emphasize that utilizing pre-trained generative models for data synthesis demands minimal effort. In our case, the synthesis of over 50,000 training images using either EG3D or Next3D takes just a few hours. [2] Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars **Inference time.** - We have discussed efficiency in Lines 333-337 in the main paper. We emphasize that our method is efficient since it is generalizable and does not require any person-specific optimization or test-time fine-tuning. Real-time performance is not a primary focus of this paper. However, we do identify possible acceleration methods such as replacing the SegFormer encoders with more lightweight networks, reducing the rendering resolution, or using mixed-precision inference. **Upper body and 360 degree view paths.** - Our method outperforms existing methods by capturing high-fidelity details in regions beyond the face area (e.g., hair, earrings). Upper-body human capture involves human body deformation and is especially challenging in the generalizable setting. 
Even in works that overfit a model to a specific identity [3], reconstructing the upper body is non-trivial and requires a dedicated neural radiance field. We leave upper-body reconstruction for future research. Similarly, 360-degree head reconstruction requires more advanced representations than the native tri-plane, as well as special training data, as discussed in [4]. Though we believe that our method can generalize to full heads using the new techniques and data in [4], it is not the focus of this paper and is left for future research. [3] AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis. ICCV 2021. [4] PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360deg. CVPR 2023. **Ethical and limitation section.** - We have discussed the limitations and the social impact in detail in the supplementary document and will move them to the main paper in the next revision. **Overconfident statements.** - We will remove the "SOTA" claims. **Limited geometry ability.** - Please see A2 in the Global Response, Fig.4 in the rebuttal PDF, and the video we sent to the AC. We show the extracted mesh in Fig.4 in the rebuttal PDF, demonstrating that the expression branch can deform the underlying geometry of the canonical tri-plane to match extreme expressions (e.g., a wide-open mouth) as long as we use balanced training data. --- Rebuttal Comment 1.1: Comment: Following a careful review of their response and the input from other reviewers (especially the conversation with R. PU1D), I am convinced that the additional experiments they carried out have led to a more comprehensive evaluation of their proposed approach. This has undoubtedly strengthened the quality of the work, and it is worth noting that the authors gave clear explanations on all aspects. I will keep my original rating (WA) and tend toward acceptance.
Summary: The method aims to build a generalizable model to create an animatable 3D human head representation from a single-view portrait source image. The resulting representation can be used to reenact the source image with target images of different subjects and expressions. The key idea is to use three tri-planar representations (T_c, T_p, and T_e) with the underlying 3DMM from [14]. T_c is a canonical tri-plane representing the neutral source face. Since the source face can be in arbitrary expressions, this branch is responsible for undoing the expression. An encoder pretrained with SegFormer [59] maps the input source image to a tri-plane with 32 feature channels. To enforce T_c to represent the neutral face, this branch relies on [14] to extract the identity and expression coefficients; the neutral expression is obtained by zeroing out the expression coefficients. T_p is responsible for capturing detailed appearance. A neural point cloud representation is constructed from the depth image of T_c viewed from the camera of the source image, with features associated with the source image, again obtained from an encoder pretrained with SegFormer [59]. This neural point cloud is finally rasterized into the appearance tri-plane T_p. T_e represents a tri-plane based on the 3DMM of the source identity and the target expression extracted with [14]. The frontal-view rendering of this identity and expression is passed to an encoder pretrained with SegFormer [59] to obtain the expression tri-plane T_e. The image generated from the sum of the three tri-planes goes through super-resolution with a pre-trained GFPGAN to obtain the final image. The method is compared against ROME [29] and StyleHeat [65] in the main paper. It is also compared against the recent work Next3D in the supplementary. Strengths: The use of the different tri-planar representations, each making use of the information from the 3DMM prediction, is interesting.
A setup only requiring a feedforward pass without test-time optimization is simple and practical. Weaknesses: A naive addition of three tri-plane representations may not make sense. T_c represents the neutral face, while T_p is obtained from the source image with an expression. A corresponding position on the face will therefore have different spatial coordinates in T_c and T_p, making this addition questionable. Perhaps consider warping the tri-planes before adding them? The first sentence in 3.2 says the appearance branch is to reconstruct intricate facial details. This may be somewhat misleading. What the appearance branch is really responsible for is capturing the details of the non-facial regions. This is clear from Fig. 2 (e), which shows a better reconstruction of the hat, hair, and necklace. Fig. 5 (f) also highlights the earring. Given the above, the proposed setup would have made more sense if the appearance branch masked out the facial region, in contrast to the canonical branch masking out the non-facial region when taking the loss. Perhaps related to this, the video shows visual artifacts. The result looks like the facial parts are animated textures on a non-deforming geometry. This is clear when the target subject opens the jaw at 0:20 and 1:04, but the resulting face opens the mouth without moving the jaw. This is a sign that the neutral face geometry has too much influence. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: If the authors have more thoughts on why the visual artifacts with the jaw open and extreme expressions happen, I would like to hear them. Other than the naive composition of the tri-planes, I suspect the underlying 3DMM may not be sufficient to capture realistic deformations. Confidence: 4: You are confident in your assessment, but it is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: As mentioned in the weakness, the method exhibits clear artifacts with large facial deformations. This should be clarified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Appearance branch.** - Please see A3 in the Global Response above. As discussed in the supplementary (Lines 105 - 108), to prevent the source expression from leaking into the final animation results, the appearance branch masks out the facial region in the input source image and only captures the non-facial details, while the expression reconstruction comes solely from the expression branch. We will clarify this further in the next revision. **Performance and artifacts in rare expressions.** - The proposed method efficiently generalizes a single network to unseen identities, while demonstrating competitive 3D head avatar animation performance quantitatively and qualitatively against multiple state-of-the-art baselines. As shown in Sec.4 in the main paper and Sec.3 in the supplementary, our model can produce realistic animation results for most common target expressions. - For the "jaw opening artifacts", please see A2 in the Global Response, Fig.4 in the rebuttal PDF, and the video we sent to the AC. We emphasize that the mouth-wide-open expression falls into the long-tail distribution of the training dataset, which causes the artifacts. It does not demonstrate a fundamental flaw in our framework's design. With the proper training data discussed in A2 above, our method can naturally deform the jaw when the mouth is wide open. We also show the extracted mesh in Fig.4 in the rebuttal PDF, demonstrating that the expression branch indeed changes not only the texture but also the underlying geometry of the canonical tri-plane. --- Rebuttal Comment 1.1: Title: Not capturing jaw open, a very common facial deformation, is a flaw in design Comment: The two frames I pointed out, where the underlying geometry visibly fails to deform, are just examples to clarify the issue. Overall, the animations look creepy because the texture is unnaturally swimming over the facial geometry with little deformation.
I argue that jaw opening is a very important part of facial deformation which any facial system should strive to do well at. Extreme jaw openings may appear less frequently in the data, but the inability to capture such deformations is a sign that the model cannot handle the more subtle jaw openings that appear more frequently. An objective proof of the importance of jaw opening is how the FLAME facial model decided to put an explicit LBS-based jaw control on top of the PCA-based facial deformations. As I noted in my initial review, the setup would have made more sense if the T_c representing the neutral face was warped according to the underlying 3DMM and then added together with the other tri-planes. It is not a surprise to me that T_c without warping failed to capture facial deformations. --- Reply to Comment 1.1.1: Title: The "jaw-opening issue" can be resolved when we use balanced training data Comment: We thank the reviewer for the response. We agree that "naturally modeling jaw opening" is of crucial importance. We kindly point the reviewer to *Fig.4 in the rebuttal PDF* and *A2 in the Global Response above*, where the *"jaw opening" issue is resolved* by using balanced training data with more jaw-opening images. This validates our assumption that the "jaw opening" issue is caused by the long-tail training data, rather than by a fundamental design flaw in our framework. For the warping idea, we kindly point the reviewer to *A3 in the Global Response above*. We tried applying explicit warping, such as a learnable flow or a TPS transformation, to the appearance tri-plane, but found that all variants led to trivial solutions (i.e., the tri-plane collapses to a thin plane). Interestingly, we found an effective implicit warping design and discuss it in detail in *A3 in the Global Response* above. Please let us know if you have more questions.
Summary: The paper proposes a novel approach for building single-shot animatable head avatars. This is achieved through three branches: the canonical reconstruction branch, the detailed appearance branch, and the expression branch, all of which are represented as tri-plane NeRFs. The three tri-planes are added together to form the final 3D representation. The paper also features a super-resolution module, which not only achieves much better photorealism than existing methods but also nicely hallucinates teeth while retaining multi-view consistency. Strengths: The usage of separate tri-planes for canonical, appearance details, and expressions is a novel approach (although there are some weaknesses associated with this design choice, as discussed in the following sections). Along with the super-resolution block, the method generates high-quality images and surpasses previous works in various image quality and diversity metrics. The proposed expression branch can successfully hallucinate teeth during deformation. Additionally, multi-view consistency is well-preserved, as demonstrated in the supplementary video. Overall, the approach is thoroughly evaluated and compared with state-of-the-art (SOTA) baselines. Weaknesses: Weakness of the result: The method has difficulty with jaw opening. In the supplementary video, when the person opens the mouth, only the lips move and not the jaw. I suspect that this is because the additive expression branch is not powerful enough to model such a big deformation. What is the authors' explanation for this? Appearance branch: Using the depth of the source image, the paper proposes to back-project the source image features into 3D space in the appearance branch. It seems to me that this branch not only adds canonical appearance details but also entangles the source expression into the 3D reconstruction.
Since the expression branch does not take into account the source expression parameters, how does the method remove the source expression from the 3D reconstruction? Does the network learn to ignore expression-dependent details in the appearance branch through neural network magic? What does R_v(T_c + T_p) look like in Figure 2? Invalid ablation on the expression branch: The paper shows that the proposed expression branch performs better compared to a 'linear expression' ablation. However, this ablation baseline does not make sense to me, because the 3DMM expressions are linear blendshapes, i.e., the warping fields are linearly combined, not the 3D volumes. Therefore, expressions cannot be expressed as a weighted average of expression tri-planes. This ablation setup is not valid in my opinion. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see the first two points of the weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes, the authors adequately discussed limitations and negative societal impact in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness of the result.** - Please see A2 in the Global Response above, Fig.4 in the PDF and the video we sent to AC. The "jaw opening" issue is caused by the long-tail expression distribution of the training dataset rather than using the additive expression branch. With training data containing a more balanced distribution of large facial expressions, our method naturally deforms the jaw when the mouth is wide open. **Appearance branch.** - Please see A3 in the Global Response above, the visualization of $R_v(T_c + T_p)$ can be found in Fig.2 (a) in the rebuttal PDF. In the submission, we remove the source expression by masking out the expression-dependent regions (i.e., eyes and mouth) before feeding the source image into the appearance branch. In A3 in the Global Response above, we further show that learning a model that implicitly ignores the source expression is feasible and shows promising preliminary results. **Invalid ablation on the expression branch.** - We will remove this ablation study in the next revision. --- Rebuttal Comment 1.1: Comment: The rebuttal partially resolved my concerns and I will keep my original score.
Summary: This paper presents a method for generating Neural Head Avatars given a single image of a subject. More specifically, the input is a source image of a person as well as a target image specifying the expression, and the goal is to generate a rendering of the person in the source image with the expression of the target image. To do so, the method predicts 3 sets of tri-planes modelling the coarse geometry and appearance of the source image, and the expression of the second image. The geometry and appearance are modeled in a canonicalized 3D pose to encourage robustness to different poses and varying viewpoints. After training, the method can be applied to unseen images to generate animatable avatars without requiring any expensive test-time optimization. Overall, the experiments validate the effectiveness of the approach. Strengths: 1. The methodology of the paper is technically sound. The components of the method are sufficiently motivated and their usefulness is assessed in the ablation study. 2. The paper has very good experimental results. Quantitatively, it performs favorably compared to the state of the art on the CelebA and HDTF datasets. The qualitative results are also very good. 3. The paper is well-written and easy to follow. Most components are adequately described in the main text and the figures are relatively easy to parse (with the exception of some parts of Figure 1, where the flow of information is not immediately clear). Weaknesses: The paper does not have any particularly important weak points. Occasionally there are some artifacts in the reconstruction for parts of the face that are not visible in the source image (e.g., Row 2 of Figure 3). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I was wondering how accurate the 3DMM prediction stage is, and how much this affects the overall quality of the reconstructions. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper fails to discuss the potential societal impacts of this work. One obvious misuse of the technology could be in DeepFakes. I believe that the authors should include a more detailed overview of the potential misuses and what safeguards can be used. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **3DMM accuracy.** - Since our expression branch relies solely on the 3DMM renderings to provide target expression information, it is indeed affected by the 3DMM's accuracy. When the 3DMM model fails to predict the target expression correctly from the target image, our model replicates the inaccurate expression in the 3D reconstruction as well. We thank the reviewer for pointing out this limitation and will discuss it in the next revision. **Social impact.** - We have discussed the limitations and the social impact of our work in detail in the supplementary and will move them to the main paper in the next revision. --- Rebuttal Comment 1.1: Comment: I read the other reviews and the rebuttal. Reviewer PU1D raised some interesting points, especially regarding the warping and the jaw opening artifacts. The authors overall provided satisfactory explanations. I will keep my original rating (WA).
Rebuttal 1: Rebuttal:
### A1. Comparison to 2D baselines (**ufhJ**, **k8VD**)
**Motivation.** We discuss the benefits of 3D avatars for talking head synthesis in Lines 35 - 37 and Lines 317 - 319 of the main paper. Other important reasons to study 3D avatars are:
- Human faces are inherently 3D, thus it is physically accurate to model faces in 3D. As shown in Fig.1 in the PDF, compared to 2D baselines, our method has fewer warping artifacts, produces consistent geometry across different views, and is robust to profile-view inputs.
- For practical usage, as mentioned by **ufhJ**, a 3D avatar is essential in immersive AR / VR applications so that a person can be rendered from novel views to convey eye contact. Meanwhile, 2D methods only change the head pose but do not address novel view synthesis easily. Furthermore, traditional pipelines (e.g., gaming) require 3D assets as inputs, which can be extracted from our method.

**Experiments.** To empirically verify the advantages of our method compared to 2D baselines, we carry out cross-identity reenactment on CelebA-HQ and show results below:

| Method | CSIM &uarr; | AED &darr; | APD &darr; | FID &darr; | Year |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| [1] FOMM | 0.528 | 0.284 | 0.034 | 37.31 | NeurIPS 2019 |
| [2] FaceVid2Vid | 0.638 | 0.284 | 0.037 | 24.66 | CVPR 2021 |
| [3] Thin-Plate | 0.556 | 0.272 | 0.029 | 34.75 | CVPR 2022 |
| [4] DaGAN | 0.502 | 0.276 | 0.032 | 36.78 | CVPR 2022 |
| [5] FNeVR | 0.520 | 0.283 | 0.031 | 33.18 | NeurIPS 2022 |
| [6] LIA | 0.607 | 0.274 | 0.037 | 34.33 | ICLR 2022 |
| [7] DPE | 0.635 | 0.285 | 0.048 | 43.38 | CVPR 2023 |
| [8] MCNET | 0.491 | 0.262 | 0.030 | 38.89 | ICCV 2023 |
| Ours | **0.649** | 0.269 | **0.018** | **18.68** | |
| Ours* (A3 below) | 0.716 | 0.262 | 0.019 | 20.70 | |

[1] First order motion model for image animation. [2] One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing. [3] Thin-plate spline motion model for image animation.
[4] Depth-Aware Generative Adversarial Network for Talking Head Video Generation. [5] FNeVR: Neural volume rendering for face animation. [6] Latent Image Animator: Learning to Animate Images via Latent Space Navigation. [7] DPE: Disentanglement of Pose and Expression for General Video Portrait Editing. [8] Conditioned Memory Compensation Network for Talking Head Video Generation.

Overall, our method has better performance. Qualitative results are shown in Fig.1 in the PDF. In the left example, our method preserves the geometry of the input, while the baselines all "squeeze" the head. In the right example, our method hallucinates realistic missing parts with high fidelity (e.g., wrinkles around the eyes) when the source image is captured from a profile view, while all baselines show warping artifacts.

---

### A2. Jaw opening issue (**8oNf**, **PU1D**)
This issue is caused by the long-tail expression distribution of the training data, rather than being a limitation of our method (i.e., "mouth wide open" is rare in the training dataset). To verify this, we replace EG3D with a deformable 3D GAN [1] for data synthesis, which has two advantages: a) each training pair includes different expressions, which encourages the model to produce more accurate deformation; b) the data synthesized by [1] includes larger and more balanced facial deformation. We re-train our method on this dataset and show qualitative results in Fig.4 in the PDF. This model realistically deforms the jaw when the mouth is open, validating our hypothesis. Please also see the video sent to the AC.

[1] Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars.

---

### A3. Learning warping in the appearance branch (**FcT2**, **PU1D**)
**How does the appearance branch ignore the source expression?**
- As discussed in the supplementary (Lines 105 - 108), we mask out expression-dependent regions (i.e., eyes and mouth) in the source image before it is input into the appearance branch.
This way, expression information comes solely from the expression branch.
- As suggested by **8oNf**, we visualize the rendering of $R_v(T_c + T_p)$ in Fig.2 (a) in the PDF, which is a neutral face with intricate details. This shows that $T_p$ adds details while preserving the expression in $T_c$.

**Can we learn warping in the appearance branch?**
- Although the "masking" strategy effectively removes the source expression in the appearance branch, the warping idea suggested by **PU1D** and **8oNf** is inspiring. Interestingly, we found that simply using the appearance encoder to implicitly warp and fuse the appearance and target expressions works best (see Fig.3 in the PDF), as opposed to explicit warping. Specifically, we keep the canonical branch unchanged and merge the appearance and expression branches. This merged branch takes as input the concatenation of the source image, the canonical rendering, the 3DMM renderings with the source and target expressions respectively, and the source image's mouth mask, and feeds it to an encoder that predicts 2D features. It then uses the "Lifting" and "Rasterization" operations (Sec. 3.2 in the paper) to produce a tri-plane $T_{ae}$ that encodes both the appearance and the target expression. The intuition is to make the encoder $E_{ae}$ aware of both expressions (from the 3DMM renderings and the mouth mask) and the appearance (from the source image) simultaneously, such that it implicitly learns to transfer details from the source image to the proper locations in $T_{ae}$. We show results in Fig.2 (b) in the PDF and evaluation in the table above (i.e., Ours*). This model has a higher CSIM because, without masking out the eyes and mouth in the appearance branch, it preserves eye and lip color well. *We emphasize that the "masking" strategy in the paper is sufficient to remove the source expression, but we thank the reviewers for the inspiring warping idea.
The preliminary experiment above proves that this idea is feasible and introduces new directions for future work.* Pdf: /pdf/b859c750584799de66266d49aba955a6e7d0715e.pdf
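As a side note for readers unfamiliar with tri-planes, the additive composition $T_c + T_p + T_e$ discussed throughout can be illustrated with a small numpy sketch. All shapes, the resolution, and the nearest-neighbor sampling below are our own simplifications for illustration; the actual model uses learned encoders, bilinear sampling, and volume rendering.

```python
import numpy as np

def sample_triplane(planes, p):
    """Nearest-neighbor sample of a tri-plane at 3D point p in [0, 1)^3.
    `planes` has shape (3, R, R, C): feature maps for the xy, xz, yz planes."""
    R = planes.shape[1]
    x, y, z = (int(p[0] * R), int(p[1] * R), int(p[2] * R))
    # A point's feature is the sum of its projections onto the three planes.
    return planes[0, x, y] + planes[1, x, z] + planes[2, y, z]

def composed_feature(T_c, T_p, T_e, p):
    # Adding tri-planes commutes with sampling, so summing the planes first
    # is equivalent to summing the per-tri-plane samples.
    return sample_triplane(T_c + T_p + T_e, p)

rng = np.random.default_rng(0)
T_c, T_p, T_e = (rng.standard_normal((3, 32, 32, 8)) for _ in range(3))
p = np.array([0.3, 0.7, 0.5])
f = composed_feature(T_c, T_p, T_e, p)  # an 8-channel feature vector
```

Because sampling is linear in the plane features, `f` equals the sum of the three individually sampled features; this linearity is also why the same spatial coordinate in $T_c$ and $T_p$ must refer to the same facial location for the addition to be meaningful, which is the concern raised about naive addition.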
NeurIPS_2023_submissions_huggingface
2023
Summary: 1. This work proposed a new framework for generalizable head animation in the one-shot setting. 2. The results are better than several baselines. Strengths: 1. The target problem is important. 2. The method is reasonable. Weaknesses: 1. Task definition. 1) The practical usage scenario needs to be clarified. The task is like a 3D version of facial animation. However, the viewpoint change range is limited and the missing parts are not inpainted. Thus, what is the actual application of this task? Why should we make facial animation 3D-aware when the viewpoint range is somewhat limited? 2) The title is misleading. It reads like a work stressing generalizability; however, the contribution lies in both generalizability and 3D-awareness. In 2D, there are already many works that are generalizable. 2. Comparison. 1) The compared baselines are insufficient. 2D works (such as [1,2,3]) still need to be compared, even if only under the unchanged viewpoints of the driving videos. 2) A comparison on CelebV-HQ is necessary. Recently, only HDTF has been used in evaluation. However, HDTF is much simpler than CelebV-HQ. Results on the more challenging CelebV-HQ dataset would be much more convincing, both in the comparison in the main paper and in the video. 3. Limitations. Limitations should be discussed in the main paper. [1] First order motion model for image animation. [2] Thin-plate spline motion model for image animation. [3] FNeVR: Neural volume rendering for face animation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No. Limitations are not discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Task definition and 2D baselines.** - Please see A1 in the Global Response above and Fig.1 in the rebuttal PDF. We emphasize that, compared to the 2D baselines, our method learns a strong 3D geometric prior of human faces and hallucinates more realistic missing parts, especially when the source portrait is captured from an extreme profile view. - We will change the title to "Generalizable One-shot 3D Neural Head Avatar" in the next revision. **Evaluation dataset.** - We have discussed the training and evaluation datasets in Section 4.1 of the main paper. Including the baselines added in this rebuttal, we have compared our method to 13 baseline methods in total, each of which is trained on a different training dataset. It is infeasible for us to re-train all of them on the same training data while guaranteeing their best performance. Thus, we follow prior work [1] and carry out the evaluation on the *CelebA-HQ* dataset and the testing split of the HDTF dataset. We believe this is a fair comparison, since neither the baseline methods nor our method has seen these two evaluation datasets during training. Meanwhile, CelebV-HQ is one of our training datasets; thus, it would not be fair to the other baselines if we ran the evaluation on it. We also emphasize that *CelebA-HQ* is a challenging dataset, which includes realistic portrait images with high-fidelity details. [1] Styleheat: One-shot high-resolution editable talking face generation via pre-trained stylegan. ECCV, 2022. **Limitations.** - We have discussed the limitations and the social impact of our work in detail in the supplementary and will move them to the main paper in the next revision. --- Rebuttal 2: Comment: Hi Reviewer ufhJ, does the rebuttal address your concerns? Any update on your final decision? Best, the AC
null
null
null
null
null
null
Distance-Restricted Folklore Weisfeiler-Leman GNNs with Provable Cycle Counting Power
Accept (spotlight)
Summary: The paper proposes a version of the 2-dimensional Weisfeiler-Leman algorithm that restricts the distance between nodes in the node tuples processed. A GNN based on this restriction is also proposed and investigated regarding its ability to count cycles and other substructures. Strengths: **S1** In general, the paper is well written and clearly structured. **S2** The idea to restrict the distance is interesting and seems to work reasonably well for counting substructures. Weaknesses: **W1** The description of the main contribution d-DRFWL(2) could be improved, making the illustration clearer. The general concept behind d-DRFWL(2) could be explained better. What exactly is updated with what information? The formula represents it, but the corresponding text and the example (Figure 1) could be more precise. It does not help that the figure is colored and the colors are referenced. I cannot distinguish red/violet/pink here. **W2** The theoretical investigation can be extended to show how the approach relates to k-WL and $\delta$-$k$-LWL. **W3** More benchmark datasets should be used for evaluation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: **Q1** Is the approach as expressive as FWL(2) theoretically? **Q2** How does the proposed approach fit into the k-WL / $\delta$-$k$-LWL hierarchy? **Q3** Can you state the differences and connections to $\delta$-$k$-LWL more clearly? **Q4** In d-DRFWL(k), what happens for larger values of d and k in terms of theoretical expressiveness, running time, and classification accuracy? ### Other remarks: The spacing between the lines is off in some places (for example, l108/109, where characters from different lines overlap). Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: To me, the theoretical expressiveness was not discussed to the proper extent (see Q2). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and suggestions. We respond to all weaknesses and questions below. **Reply to W1.** Thanks for the valuable suggestion. To understand the concept behind $d$-DRFWL(2), one has to first understand FWL(2). FWL(2) works by (i) assigning each 2-tuple $(u,v)\in\mathcal{V}_G^2$ a representation $W(u,v)$; (ii) updating the representation $W(u,v)$ iteratively. The update of $W(u,v)$ relies on an additional node $w$, and information is collected from both $(u,w)$ and $(w,v)$; afterwards, the above-collected information from all $w\in\mathcal{V}_G$ is gathered into a multiset, and the multiset is hashed into value $W(u,v)$ as the updated representation of $(u,v)$. $d$-DRFWL(2) differs from FWL(2) in two points: (i) only $(u,v)$ with a limited distance $d(u,v)\leqslant d$ are given a representation; (ii) when updating $W(u,v)$, only information from $w$ with both $d(u,w)\leqslant d$ and $d(w,v)\leqslant d$ is gathered into the multiset. From the above description, one can see $d$-DRFWL(2) as a pruned version of FWL(2), in which representations for node pairs $(u,v)$ with $d(u,v)>d$ are deleted. So far we have explained how $d$-DRFWL(2) works, what information is collected and what is updated in $d$-DRFWL(2). Now let us explain Figure 1, which illustrates 2-DRFWL(2). As explained above, 2-DRFWL(2) first assigns a representation to all pair of nodes with mutual distance $\leqslant 2$. Then, these representations will be updated. Since in Figure 1 we have $d(u,v)=2$, $(u,v)$ is among the 2-tuples which will be assigned a representation. We exclusively focus on the update procedure of $W(u,v)$ in Figure 1. From point (ii), only those nodes $w$ with $d(u,w)\leqslant 2$ and $d(w,v)\leqslant 2$ play a role in the update procedure. Therefore, $w$ can be any of $u,v,x,y,z,t,r$ in Figure 1. 
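The pruned refinement described above can be sketched in a few lines of Python. This is our own illustrative implementation, not the paper's GNN: we use dict-of-set adjacency and Python's built-in `hash` in place of learnable message and update functions.

```python
from itertools import product

def drfwl2_refine(adj, d=2, rounds=3):
    """Illustrative d-DRFWL(2) color refinement. `adj` maps each node to its
    set of neighbors; returns the multiset of final 2-tuple colors."""
    nodes = list(adj)

    def bfs(src):  # shortest-path distances from src
        dist, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        return dist

    dist = {u: bfs(u) for u in nodes}
    # (i) Only 2-tuples (u, v) with d(u, v) <= d get a representation,
    #     initialized here by the distance itself.
    W = {(u, v): dist[u][v] for u, v in product(nodes, nodes)
         if dist[u].get(v, d + 1) <= d}
    for _ in range(rounds):
        # (ii) Update W(u, v) from a multiset of (W(u, w), W(w, v)) pairs,
        #      gathered only over w with d(u, w) <= d and d(w, v) <= d.
        W = {(u, v): hash((W[(u, v)], tuple(sorted(
                 (W[(u, w)], W[(w, v)])
                 for w in nodes if (u, w) in W and (w, v) in W))))
             for (u, v) in W}
    return sorted(W.values())

# The classic pair that WL(1) cannot distinguish: a 6-cycle vs. two triangles.
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
                 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
separated = drfwl2_refine(cycle6) != drfwl2_refine(two_triangles)
```

In this example 2-DRFWL(2) separates the two graphs already at initialization: the 6-cycle contains node pairs at distance 2, while the two triangles contain none (cross-component pairs exceed the distance cap and receive no representation at all, which is exactly the pruning in point (i)).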
We further classify the seven nodes based on their distances to $u$ and to $v$:

|Nodes|Distance to $u$|Distance to $v$|Color|
|:-:|:-:|:-:|:-:|
|u|0|2|violet|
|v|2|0|red|
|x, y|1|1|green|
|z|1|2|blue|
|t|2|1|yellow|
|r|2|2|pink|

Each of these nodes contributes a piece of information of the form $(W(u,w), W(w,v))$, and these pieces are then gathered to update $W(u,v)$. The colored lines in Figure 1 simply depict the node pairs $(u,w)$ and $(w,v)$, with $w\in\{u,v,x,y,z,t,r\}$. We will further clarify the text describing $d$-DRFWL(2), and change the colors in Figure 1 to more easily distinguishable ones in our revision.

**Reply to W2.** Thanks for the suggestion. Since 2-WL is as expressive as WL(1), and 3-WL is as expressive as FWL(2) [a], our methods lie strictly between 2-WL and 3-WL. As for the $\delta$-$k$-LWL hierarchy, it is known from [b] that for $k=2$, $\delta$-2-LWL is as expressive as the SSWL proposed in [b]. However, the relation in expressiveness between SSWL (or $\delta$-2-LWL) and $d$-DRFWL(2) is unknown for arbitrary $d$. In fact, for any finite $d$, two $(3d+1)$-cycles and one $(6d+2)$-cycle can be separated by SSWL ($\delta$-2-LWL) but not by $d$-DRFWL(2). This shows that **no $d$-DRFWL(2) with a fixed $d$ value can be more powerful than SSWL ($\delta$-2-LWL)**. On the other hand, for graphs with diameter $\leqslant d$, $d$-DRFWL(2) has equal power to FWL(2). Since it is shown in [b] that there exist graph pairs separable by FWL(2) but not by SSWL, once $d$ becomes larger than the diameter of such a graph pair, $d$-DRFWL(2) can separate that pair of graphs. This shows that **with sufficiently large $d$, the power of $d$-DRFWL(2) is neither stronger nor weaker than that of SSWL ($\delta$-2-LWL)**. Nevertheless, the relation in discriminating power between SSWL ($\delta$-2-LWL) and $d$-DRFWL(2) **with a smaller $d$ value**, such as $d=2$ or $d=3$, remains an open question.
Finally, since for larger $k$ values the precise relationship in expressiveness between $\delta$-$k$-LWL and $k$-WL has not been established, the relation between $d$-DRFWL(2) and $\delta$-$k$-LWL also remains an open problem. [a] N. T. Huang and S. Villar. A short tutorial on the weisfeiler-lehman test and its variants. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). [b] B. Zhang, G. Feng, Y. Du, D. He, and L. Wang. A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests. arXiv preprint arXiv:2302.07090, 2023. **Reply to W3.** Thanks for the suggestion. We further evaluate 2-DRFWL(2) GNN on the two graph-level property prediction datasets `Peptides-func` and `Peptides-struct` of the Long Range Graph Benchmark (LRGB) [a]. For experiments on both datasets, we use 5 2-DRFWL(2) GNN layers with hidden size 120. We train for 200 epochs using the Adam optimizer with initial learning rate 0.001, and a plateau scheduler with patience 10, decay rate 0.5, and minimal learning rate 1e-5. The batch size is 64. The results are shown in Table 4 of the PDF. From the table, we see that our method outperforms almost all message-passing GNNs, and even achieves performance comparable to transformer-based models on `Peptides-struct`. Therefore, we conclude that although designed to capture local structural information, our model still learns long-range dependencies well. [a] V. P. Dwivedi, L. Rampášek, M. Galkin, A. Parviz, G. Wolf, A. T. Luu, D. Beaini, Long Range Graph Benchmark. **Reply to Questions** Due to the space limit, we only provide brief responses to your questions. We are willing to append the full responses during the author-reviewer discussion.
*Q1.* No, our method is strictly less powerful than FWL(2). *Q2.* Our method lies strictly between WL(2) and WL(3), while the comparison with $\delta$-$k$-LWL is left open. Due to the space limit, we have to defer our responses to the other questions to the author-reviewer discussion phase. We are sorry for the inconvenience, and will add the remaining responses immediately when the discussion period begins. --- Rebuttal Comment 1.1: Title: Author Response to Reviewer TSJX, continued Comment: We now respond to the questions posed by Reviewer TSJX. Those responses were absent from our rebuttal due to the space limit. **Reply to Q1.** The answer is negative. In Theorem 3.2 of our paper, we have established a strict separation result between the power of FWL(2) and $d$-DRFWL(2), for any $d$. The intuition is that since $d$-DRFWL(2) only keeps the representations for 2-tuples $(u,v)$ with $d(u,v)\leqslant d$, information from 2-tuples $(u,v)$ with $u$ and $v$ sufficiently distant (compared with $d$) is inevitably lost. On the contrary, the update rule of FWL(2) is global, meaning that information is gathered from the entire graph instead of the vicinity of the current 2-tuple. Therefore, FWL(2) can capture global information while $d$-DRFWL(2) cannot. This accounts for their expressiveness gap. **Reply to Q2.** See our response to **W2**. **Reply to Q3.** Both our method and $\delta$-$k$-LWL try to leverage sparsity to accelerate higher-order algorithms. However, there are two major differences between them. First, our method is built upon FWL(2), while $\delta$-$k$-LWL is built upon $k$-WL. This results in different update rules. In our method, a 2-tuple $(u,v)$ receives information of the form $(W(u,w),W(w,v))$, where $w$ is any node that has distance $\leqslant d$ to both $u$ and $v$.
In $\delta$-$k$-LWL, a $k$-tuple $(u_1,\ldots, u_k)$ receives information from other $k$-tuples of the form $(u_1,\ldots, u_{j-1},w,u_{j+1},\ldots, u_k)$, where $w$ is a neighbor of $u_j$. Second, our model reduces both the messages to pass and the representations to store. In contrast, $\delta$-$k$-LWL only reduces the messages to pass; the number of representations to store is not reduced. Therefore, both the time complexity and the space complexity of our model are lower than those of FWL(2), while $\delta$-$k$-LWL only lowers the time complexity of $k$-WL. **Reply to Q4.** If $k=2$ but $d$ increases, the expressive power of $d$-DRFWL(2) strictly increases, as stated in Theorem 3.2 of our paper. The running time grows as $O(n\ \mathrm{deg}^{2d})$, where $n$ and $\mathrm{deg}$ are the number of nodes and the average degree, respectively. Generally, there is no simple relation between the value of $d$ and the classification accuracy, because the accuracy depends not only on the model's expressiveness but also on the data. If $k$ increases, the definition of the "mutual distance" of a $k$-tuple for $k>2$ is not obvious. If the "mutual distance" within a $k$-tuple is defined as the maximal distance between nodes in it, then the resulting $d$-DRFWL($k$) will form a similar expressiveness hierarchy both for increasing $k$ and for increasing $d$. The running time will grow as $O(n\ \mathrm{deg}^{kd})$. As above, no simple conclusion about classification accuracy can be drawn. **Reply to Other remarks** Thanks for the suggestion. We will fix it in our revision. **Reply to Limitations** We acknowledge that **Q2** has not been fully addressed. While we keep working on the comparison in expressiveness between $d$-DRFWL(2) GNN and other expressiveness hierarchies, we point out that this is not our major concern, as stated in the first point of our general response.
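For concreteness, the pruned refinement described in our reply to Q3 can be written down in a few lines. The Python sketch below is purely illustrative (toy hash-based colors and naive BFS, not our released implementation); it also reproduces, for $d=2$, the counterexample behavior mentioned in our reply to W2: two $(3d+1)$-cycles versus one $(6d+2)$-cycle receive identical color multisets, while two triangles versus one 6-cycle are separated (in this toy version, already at initialization, since the numbers of stored tuples differ).

```python
from collections import deque

def bfs_dists(adj, src, cap):
    # distances from src, truncated at radius `cap`
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        if dist[x] < cap:
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
    return dist

def drfwl2_colors(adj, d, iters=3):
    """Toy d-DRFWL(2) refinement: only tuples (u, v) with d(u, v) <= d get a
    color, and updates only aggregate over nodes w close to both u and v."""
    dist = {u: bfs_dists(adj, u, d) for u in adj}
    # the initial color of (u, v) is simply the distance d(u, v)
    W = {(u, v): dist[u][v] for u in adj for v in dist[u]}
    for _ in range(iters):
        W = {(u, v): hash((W[(u, v)],
                           tuple(sorted((dist[u][w], dist[w][v], W[(u, w)], W[(w, v)])
                                        for w in dist[u] if v in dist[w]))))
             for (u, v) in W}
    return sorted(W.values())

def cycle(k, offset=0):
    # adjacency lists of a k-cycle on nodes offset .. offset+k-1
    return {offset + i: [offset + (i - 1) % k, offset + (i + 1) % k] for i in range(k)}

# two (3d+1)-cycles vs. one (6d+2)-cycle are NOT separated (here d = 2)
assert drfwl2_colors({**cycle(7), **cycle(7, 7)}, 2) == drfwl2_colors(cycle(14), 2)
# but two triangles vs. one 6-cycle ARE separated
assert drfwl2_colors({**cycle(3), **cycle(3, 3)}, 2) != drfwl2_colors(cycle(6), 2)
```

The sketch stores one color per distance-restricted 2-tuple only, which is exactly the source of the complexity savings discussed in our reply to Q4.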
Summary: The paper deals with constructing expressive GNNs (i.e., more expressive than 1-WL) with low complexity. The authors focus on cycle counting ability, motivated by the importance of cycles in real-life chemical datasets. With this problem and motivation at hand, the authors propose a family of algorithms possessing a strict hierarchy property, called $d$-distance restricted FWL, or $d$-DRFWL(2). They identify the ability to count cycles in the tuple-based coloring and aggregation of the $2$-FWL algorithm, compared to MPNNs, which are bounded by $1$-WL and cannot even count a 3-cycle, but they aim to avoid the $O(n^2)$ memory and $O(n^3)$ time complexity of $2$-FWL. To that end, they propose only coloring 2-tuples $(u,v)$ that are at most $d$ apart, together with a distance-dependent aggregation; this algorithm is named $d$-DRFWL(2). They then show that already for $d=2$ the proposed algorithm can count up to 6-cycles, and that increasing $d$ strictly increases discriminative power. The proposed algorithm is the most efficient known model that can count 6-cycles, with time and memory complexities of $O(n\mathrm{deg}^4)$ and $O(n\mathrm{deg}^2)$, respectively.

Strengths: The authors clearly present the problem, goal, and motivation for their work. On the originality and novelty side, the authors design the most efficient model to date that can count 6-cycles and show a thorough analysis of one of its instances, 2-DRFWL(2). Due to the importance of cycle counts in chemical and biological structures, I think that this work is valuable to that community.

Weaknesses:
- Although taking a step towards more efficient and expressive GNNs, the method is only efficient when graphs are sparse enough. To my understanding, the proposed method still has not broken the limit of running on graphs of sizes $O(100)$ (e.g., the ogbg-ppa dataset).
- Do the authors have any explanation for the large performance gap between their method and $I^2$-GNN on the QM9 dataset? Theoretically, they both can count the same sizes of cycles, so this clouds the motivation for the paper in my opinion, since cycle counting may not be what makes this method more successful.

Technical Quality: 3 good Clarity: 3 good

Questions for Authors:
- Have the authors considered a baseline model where they add the cycle/substructure counts as features? Is the proposed method indeed better than the naive approach?
- An ablation that I find missing is the empirical effect of $d$. Have the authors experimented with larger $d$ values?
- How does the proposed hierarchy compare to LFWL(2) and SLFWL(2)? As these approaches are closely related, coloring 2-tuples at lower complexity, a more elaborate discussion would add to the work. Also, either a theoretical or an experimental comparison would be good.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and thorough comments. We respond to all points below. **Reply to W1.** We do agree with your comment that the efficiency of our model is only guaranteed when graphs are sparse. Still, our model can handle graphs with even $n\approx 500$ nodes, as long as the average degree $\mathrm{deg}\sim O(1)$, as is verified by our experiments on the ProteinsDB and HomologyTAPE datasets. On graphs with $n$ nodes and average degree $\mathrm{deg}$, $d$-DRFWL(2) GNN uses $O(n\ \mathrm{deg}^d)$ space and $O(nd\ \mathrm{deg}^{2d})$ time to store and update the embeddings for distance-restricted 2-tuples. Therefore, it is in general $\mathrm{deg}$ rather than $n$ that has the larger impact on our model's efficiency. This fact accounts for the failure of $d$-DRFWL(2) GNN on ogbg-ppa, where the average degree is about 10. The search for efficient and powerful methods for dense graphs remains open. However, we also note that practical molecule datasets (where cycle counting matters most) only have an average degree between 2 and 3, making our model feasible in most cases. **Reply to W2.** Thanks for the valuable question. Below we list the relative performance gain, computed by $$\frac{\text{I}^2\text{-GNN MAE}-\text{2-DRFWL(2) GNN MAE}}{\text{I}^2\text{-GNN MAE}}$$

|Target | Relative Performance Gain|
|:---:|:---:|
|$\mu$|0.192|
|$\alpha$|0.035|
|$\varepsilon_\text{homo}$|0.134|
|$\varepsilon_\text{lumo}$|0.157|
|$\Delta\varepsilon$|0.147|
|$R^2$|0.193|
|$\text{ZPVE}$|-0.214|
|$U_0$|0.261|
|$U$|0.257|
|$H$|0.461|
|$G$|0.402|
|$C_v$|-0.234|

We see that the largest relative performance gains of 2-DRFWL(2) GNN compared with I$^2$-GNN occur on the targets $U_0,U,H,G$ (each with gain $\geqslant$ 25%). These targets are macroscopic thermodynamic properties of molecules.
For such properties, inter-molecular interactions (such as hydrogen bonds), as well as interactions between molecules and the environment, can be as important as interactions within molecules. Such interactions may occur between two atoms that are graph-theoretically distant (for example, between the “head” of a molecule and the “tail” of a nearby molecule). Therefore, the subgraph extraction operation inhibits information from propagating between such atom pairs. Although our models are also local, they still allow long-range message passing by running for multiple iterations. Therefore, we suspect that it is our model's stronger ability to capture long-range interactions that leads to the observed performance gap with I$^2$-GNN. **Reply to Q1.** We have included "Graph Substructure Networks" (GSN) [a] as a baseline model on the ZINC-12k and ogbg-molhiv datasets. GSN augments the graph with node-level and edge-level induced substructure counts, and then runs a message passing GNN. As shown in the following tables, the performance of our model is either better than or comparable with GSN on the two benchmarks.

| Method | ZINC-12k (MAE) |
| :---: | :---: |
|GSN| 0.115 $\pm$ 0.012 |
|$d$-DRFWL(2) GNN | **0.077** $\pm$ 0.002 |

| Method | ogbg-molhiv (ROC-AUC) |
| :---: |:---:|
|GSN (GIN+VN base)| 0.7799 $\pm$ 0.0100 |
|GSN (DGN+substructures)| **0.8039** $\pm$ 0.0090|
|$d$-DRFWL(2) GNN | 0.7818 $\pm$ 0.0219 |

[a] Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):657–668, 2022. **Reply to Q2.** Thanks for the question. We have included a comprehensive study of the effect of larger $d$ values in the 2nd point of our general response. **Reply to Q3.** Thanks for the in-depth question. For any finite $d$, we can construct graph pairs that are separable by LFWL(2)/SLFWL(2) but not by $d$-DRFWL(2).
One such graph pair is two $(3d+1)$-cycles versus one $(6d+2)$-cycle, between which $d$-DRFWL(2) cannot discriminate. However, both LFWL(2) and SLFWL(2) can calculate the distance between every pair of nodes, e.g. by simulating the Bellman-Ford shortest path algorithm. Therefore, it is easy for LFWL(2)/SLFWL(2) to distinguish between the aforementioned pair of graphs, since they have different diameters. This shows that **no $d$-DRFWL(2) with a fixed $d$ value can be more powerful than LFWL(2)/SLFWL(2)**. On the other hand, for graphs with diameter $\leqslant d$, $d$-DRFWL(2) has equal power to FWL(2). Since it is shown in [a] that there exist graph pairs separable by FWL(2) but not by LFWL(2)/SLFWL(2), once the value of $d$ grows larger than the diameter of such graph pairs, those graph pairs can be separated by $d$-DRFWL(2). This shows that **with sufficiently large $d$, the power of $d$-DRFWL(2) is neither stronger nor weaker than LFWL(2)/SLFWL(2)**. However, the relation in discriminating power between LFWL(2)/SLFWL(2) and $d$-DRFWL(2) **with a practical $d$ value**, such as $d=2$ or $d=3$, remains underinvestigated. **Are $d$-DRFWL(2) with those smaller $d$ values less powerful than LFWL(2)/SLFWL(2), or are they incomparable?** In an attempt to tackle this problem computationally, we implement in Python the construction procedure of the generalized Furer graph pairs described in [a]. It is claimed in [a] that the generalized Furer graph pairs constructed from Figures 9, 10, and 11 of [a] cannot be separated by LFWL(2), while the generalized Furer graph pairs constructed from Figure 10 of [a] cannot be separated by SLFWL(2). However, our experiments show that none of the mentioned generalized Furer graph pairs can be separated by $d$-DRFWL(2) with $d=2$ or $d=3$. Therefore, the question raised above remains open for future research. [a] B. Zhang, G. Feng, Y. Du, D. He, and L. Wang. A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests.
arXiv preprint arXiv:2302.07090, 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I recommend adding the discussion on **Q3** to the revised version, as well as the discussion on $d>2$. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for thoroughly reading our responses and giving constructive suggestions. We will include the discussion on **Q3**, as well as the additional discussion on $d$-DRFWL(2) with $d>2$, in our revised version.
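As a side note to the complexity discussion in our reply to W1 above: the gap between the $O(n\ \mathrm{deg}^d)$ storage of $d$-DRFWL(2) and the $O(n^2)$ storage of full FWL(2) is easy to see numerically. The snippet below is an illustrative, self-contained Python sketch (not our implementation) that counts the stored 2-tuples on a sparse graph:

```python
from collections import deque

def cycle(n):
    # adjacency lists of an n-cycle (a sparse graph with deg = 2)
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def pairs_within(adj, d):
    """Count ordered pairs (u, v) with d(u, v) <= d -- the only 2-tuples
    whose embeddings d-DRFWL(2) stores, versus n^2 for full FWL(2)."""
    total = 0
    for u in adj:
        dist = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            if dist[x] < d:
                for y in adj[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        q.append(y)
        total += len(dist)
    return total

n = 100
stored = pairs_within(cycle(n), 2)         # grows like n * deg^d
print(stored, "vs full FWL(2):", n * n)    # 500 vs 10000
```

Doubling $n$ doubles the stored-tuple count on such a graph, whereas the FWL(2) figure quadruples; this is the linear-in-$n$ behavior our W1 reply refers to.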
Summary: The paper focuses on the task of counting or detecting substructures such as cycles or small cliques in graphs. This task is (theoretically and practically) impossible for standard GNNs. While higher-order GNNs are able to count such substructures, they are slow, using $n^2$ space and $n^3$ time, which is infeasible for large graphs. The paper suggests making such higher-order GNNs practical by only considering "local" pairs of nodes (in the case of a 2-GNN), as substructures such as cycles are inherently local too. The paper proves that such distance-restricted higher-order GNNs are able to count substructures; in particular, distance 2 and a 2-GNN suffice to count 6-cycles. Experiments underline the strong substructure counting power of the suggested method while still achieving reasonable runtimes (in contrast to a full 2-GNN).

Strengths: Strong cycle counting power without being extremely slow: finally we have an efficient GNN that can count not only triangles (I have seen a few of those) but the much more complicated (and possibly more interesting) 6-cycles. This holds both in theory and in practice. The authors provide a clear theory (I have to admit that I did not check the proofs in the appendix) and extensive experiments focusing on the main claim, namely the strong cycle and, in general, substructure counting power. In terms of efficiency, I was surprised that the practical memory consumption is not far from what a standard MPGNN uses (even though in the molecular domain the maximum degree in each graph is typically 3 or 4 and thus not extremely large). Overall a very nice paper!

Weaknesses: I don't have strong criticisms about the paper (but please see the questions below). Here is a list of things I noted while reading the paper, roughly in the order of appearance:
- 14: How is subgraph extraction avoided when using DRFWL? The DR-k-FWL procedure somehow needs to operate on small induced subgraphs.
- 48: I would say it is not obvious why 2-FWL encodes closed walks. It would be nice to have some additional intuition there.
- 62: Many graphs have small diameter; for those, the distance restriction does not help too much. In particular, on the complete graph the distance restriction is trivially useless. The "deg" dependency is only introduced in the next paragraph, so the statement that DRFWL(2) should have a lower complexity than 2-FWL is unclear, as this obviously does not hold for the worst-case complexity.
- The related work might also include methods that break the 1-WL barrier by aggregating from multiple distances at once (those methods often call themselves "higher-order" in the sense that they use a "higher-order neighborhood", often 2 or 3 hops).
- At Q4: please make explicit that this is a question to be answered empirically (the theory result is already there).
- How about long-range dependencies? (https://arxiv.org/abs/2206.08164)
- It would be nice to mention that the model is not suitable for node classification, as the corresponding graphs typically have relatively large degrees (at least compared to molecules).

Typography:
- Leman asked to be written without the h. This is why some older papers still use the additional h while most newer ones do not.
- It might look nicer to use \bm for the $d$ in section headings.
- I personally prefer it when equations are referenced as Eq. (6) instead of just (6). One way is to use \cref throughout the document.
- 173: the "one the other hand" seems out of context.
- It would be nice to have author names in addition to a citation [99] when [99] acts as the subject of that sentence. This happens several times in the related work section.
- 51: := \ell -> =: \ell (as \ell is to be defined)
- 296: Similar -> A similar
- Thanks a lot for proper proofreading!

Technical Quality: 3 good Clarity: 3 good

Questions for Authors:
- There seems to be a subtle difference between equations 3,4 and equations 6,7.
In particular, 6,7 explicitly create $M_{i,j}$ for every distance pair $i,j$, while equations 3,4 just create a single multiset of sift operations (colors). Would you like to comment on that small difference? I have the feeling that this change may make the distance-restricted method (with d>diameter) stronger than pure FWL, although your theorem states that this is not the case - effectively the DR variant seems to individualize one node.
- Is 2-DRFWL(2) well suited for long-range dependencies? https://arxiv.org/abs/2206.08164
- How does it perform on molpcba (which is more commonly evaluated against than molhiv)?
- Is there a simple intuitive reason why FWL(2) cannot count more than 7-cycles?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors addressed the main limitation of the work, namely that it becomes a lot slower when the degree in a graph increases, limiting the applicability to mostly the molecular domain and keeping it from working in the context of node classification. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our contributions as well as providing the insightful suggestions. **Reply to W1.** $d$-DRFWL(2) GNNs process a graph $G$ in two steps: (i) extract all 2-tuples $(u,v)\in\mathcal{V}_G^2$ that satisfy $d(u,v)\leqslant d$; (ii) initialize and update the embeddings of all such 2-tuples using the update rules (equations 10 and 11 of the paper). In the above procedure, the message passing between 2-tuples takes place directly on the entire graph $G$, without the need to extract subgraphs from $G$ and view $G$ as a "bag of subgraphs". Moreover, messages from any 2-tuple $(u, v)$ will be propagated to the entire graph after a sufficiently large number of iterations. In both senses, DRFWL(2) GNNs avoid extracting subgraphs. **Reply to W2.** Briefly speaking, the update rule of FWL(2) allows each 2-tuple $(u,v)$ to be aware of the information from another two 2-tuples $(u,w)$ and $(w,v)$, for an arbitrary $w$. Now, let us assume that we have *somehow* trained an FWL(2) GNN such that for every 2-tuple $(u,v)$, its embedding contains $\left(W_{d_1}(u,v), W_{d_2}(u,v), W_{d_3}(u,v)\right)$, where $W_k(u,v)$ denotes the number of $k$-walks starting from $u$ and ending at $v$. Then, the update rule of FWL(2) would allow the following value to be calculated: $$W_{d_1}(u,v)\cdot\sum_{w\in\mathcal{V}_G} W_{d_2}(u,w)W_{d_3}(w,v)$$ If further summed over different $(d_1, d_2, d_3)$ combinations with fixed $l=d_1+d_2+d_3$, this yields the number of closed $l$-walks that pass through $u$ and $v$. In this sense, FWL(2) can encode closed walks, as long as information about shorter walks can be obtained. Since shorter walks are easy to capture with a few FWL(2) updates, our claim holds. **Reply to W3.** We acknowledge that for graphs with small diameters (i.e., diameter $\lesssim d$), the distance restriction of $d$-DRFWL(2) helps little in reducing computational cost. However, such graphs either have few nodes or are very dense (e.g. complete graphs).
For the first case, efficiency is usually not a major concern. The second case rarely happens, since real-world graphs are usually sparse. We will explicitly state our assumption on sparsity in our revision, making the "deg" dependency clearer. **Reply to W4.** Thanks for the suggestion. We include a discussion on those methods here. As proposed in [a], such methods are often called K-hop message passing GNNs (K-hop MPNNs). Like our method, K-hop MPNNs also explicitly leverage distances to control message passing, thus making use of the sparsity of the graphs. Their discriminating power is also between WL(1) and FWL(2). However, a major deficiency of K-hop MPNNs is that messages are passed between **nodes** instead of **node pairs**. As discussed in our paper, this kind of message passing is **incapable of detecting substructures, especially cycles**. Indeed, as shown in Figure 1 of [a], the pair of graphs in either Example 1 or Example 2 contain different numbers of 3-cycles; however, K-hop MPNNs with either the shortest path distance kernel or the graph diffusion kernel fail to distinguish either graph pair, implying that vanilla K-hop MPNNs **fail to graph-level count even the simplest 3-cycles**. In contrast, with $d=2$, $d$-DRFWL(2) GNNs can already node-level count up to 6-cycles. [a] Feng, J. & Chen, Y. & Li, F. & Sarkar, A. & Zhang, M. (2022). How Powerful are K-hop Message Passing Graph Neural Networks. 10.48550/arXiv.2205.13328. **Reply to W5 & W7.** Thanks for the suggestions. We will clarify both points in our revision. **Reply to W6.** Thanks for the question. We evaluate 2-DRFWL(2) GNN on the two graph-level property prediction datasets `Peptides-func` and `Peptides-struct` of the Long Range Graph Benchmark (LRGB). For experiments on both datasets, we use 5 2-DRFWL(2) GNN layers with hidden size 120.
We train for 200 epochs using the Adam optimizer with initial learning rate 0.001, and a plateau scheduler with patience 10, decay rate 0.5, and minimal learning rate 1e-5. The batch size is 64. The results are shown in Table 4 of the PDF. From the table, we see that our method outperforms almost all message-passing GNNs. Surprisingly, on `Peptides-struct` the performance of our model is **even comparable to Transformer-based models** (which inherently capture long-range information). Therefore, we conclude that although designed to capture local structural information, our model still learns long-range dependencies well. **Reply to Typography.** Thanks a lot for the meticulous checking! We will update them all in the revision. **Reply to Questions.** Due to the space limit, we only provide brief responses to your questions. We are willing to append the full responses during the author-reviewer discussion. *Q1:* The fact is that with $d >$ diameter, $d$-DRFWL(2) is **as powerful as** the FWL(2) test. The key is that FWL(2) can also calculate the distance between every node pair $(u,v)$, after which it can store the distance as an additional marking for the 2-tuple. Now it is easier to see the equivalence with equations 6,7, as long as $d >$ diameter. *Q2:* Please see **W6**. *Q3:* The result of 2-DRFWL(2) GNN on molpcba is as follows.

|Method | ogbg-molpcba (AP)|
|:---:| :---:|
| GIN+virtual node | 0.2703 $\pm$ 0.0023|
| Nested GIN+virtual node | **0.2832** $\pm$ 0.0041 |
| 2-DRFWL(2) GNN (ours) | 0.2076 $\pm$ 0.0026 |

We see that our model performs poorly on ogbg-molpcba, for which we currently have no proper explanation. While we will keep fine-tuning the hyperparameters, we point out that none of the known higher-order GNNs (or their sparsified variants) have ever appeared on the leaderboard of ogbg-molpcba, indicating that ogbg-molpcba might be particularly hard to optimize or generalize on for higher-order GNNs (including ours).
*Q4:* The simplest answer is that the Shrikhande graph and the `4*4` Rook's graph (a well-known pair of FWL(2)-inseparable graphs) have different numbers of 8- to 16-cycles. The intuitive explanation is deferred to the author-reviewer discussion. --- Rebuttal Comment 1.1: Title: Author Response to Reviewer 9V2p, continued Comment: Due to the space limit, only brief responses were given to the questions posed by Reviewer 9V2p. We now append the full responses to those questions below. **Reply to Q1.** The fact is that with $d$ greater than the diameter, $d$-DRFWL(2) is as powerful as the FWL(2) test. The key point is that FWL(2) can also individually deal with different distance pairs $i,j$, as in equations 6, 7. As stated in lines 174--176 of our paper, with a few FWL(2) iterations one can calculate the distance $d(u,v)$ between every pair of nodes $(u,v)$. Due to this fact, we can design an FWL(2) instance such that it first computes the distance between every pair of nodes, and then stores the distance $d(u,v)$ as an additional marking for the 2-tuple $(u,v)$. After executing the above operations, the contributions from each distance pair $i,j$ are naturally disjoint, because $i$ and $j$ have been attached as labels to $W^{(t-1)}(u,w)$ and $W^{(t-1)}(w,v)$, respectively. This way, it is easier to see the equivalence with equations 6, 7, as long as $d>$ diameter. **Reply to Q2.** See our response to **W6**. **Reply to Q3.** Thanks for the question. We evaluate 2-DRFWL(2) GNN on molpcba. We tune the hyperparameters by randomly searching the following space with four trials: (i) hidden size $\in[100,300]$; (ii) number of layers $\in[3,5]$; (iii) learning rate $\in[0.001, 0.008]$; (iv) dropout rate $\in\{0.2,0.5\}$; (v) plateau scheduler patience $\in\{10,20\}$; (vi) plateau scheduler decay $\in\{0.5, 0.8\}$. We also apply layer normalization after each 2-DRFWL(2) GNN layer. We train for 150 epochs with batch size 256. The best four-run result among all trials is already presented in our rebuttal.
Our model performs poorly on ogbg-molpcba, for which we currently have no proper explanation. While we will keep fine-tuning the hyperparameters, we point out that none of the known higher-order GNNs (or their sparsified variants) have ever appeared on the leaderboard of ogbg-molpcba, indicating that ogbg-molpcba might be particularly hard to optimize or generalize on for higher-order GNNs (including ours). **Reply to Q4.** Thanks for the question. The simplest answer is that the Shrikhande graph and the `4*4` Rook's graph (a well-known pair of FWL(2)-inseparable graphs) have different numbers of 8- to 16-cycles. The counterexamples for longer cycles are constructed in [a]. However, this proof lacks intuition. Nevertheless, an intuitive explanation may be given for why our technique for proving the ability of FWL(2) to count 6- or 7-cycles fails on 8-cycles. We notice that to count a longer cycle, we always first split it into two distinct paths whose counts we can obtain. For example, a 6-cycle is split into a 2-path and a 4-path, with 2- and 4-paths both known to be countable by FWL(2). Now if we want to count 8-cycles, we obviously should split an 8-cycle into two 4-paths. The next step is to enumerate all cases in which some of the nodes on the two paths coincide. For convenience, say the two 4-paths both start from $u$ and end at $v$. Consider a coinciding case where the two 4-paths are of the form u->x->y->z->v and u->x->b->c->v. To count such a case, one must count 6-cycles like x->y->z->v->c->b->x for **any** pair of $x$ and $v$. Performing this count further requires the number of walks x->y->z->v->y->z->x, which appears as a coinciding case when counting the 6-cycles for $x$ and $v$. However, this is generally **not countable by FWL(2)**, since $(x,v)$ not only needs to know $(x,y), (y,v)$ or $(x,z), (z,v)$, but also has to ensure that $y$ and $z$ are neighbors. We believe similar intuitions may be provided for longer cycles. [a] V. Arvind, F. Fuhlbrück, J. Köbler, and O. Verbitsky.
On weisfeiler-leman invariance: Subgraph counts and related graph properties. Journal of Computer and System Sciences, 113:42–59, 2020.
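As a concrete check of this counterexample, the following self-contained sketch (an illustration added here, not part of the original response; it assumes the standard constructions of the two strongly regular graphs) brute-force counts simple k-cycles in both graphs:

```python
from itertools import product

def rook_4x4():
    # 4x4 Rook's graph: vertices are cells of a 4x4 board,
    # edges join cells sharing a row or a column.
    V = list(product(range(4), repeat=2))
    return {u: {v for v in V if v != u and (v[0] == u[0] or v[1] == u[1])}
            for u in V}

def shrikhande():
    # Shrikhande graph as the Cayley graph of Z4 x Z4 with
    # connection set {+-(1,0), +-(0,1), +-(1,1)}.
    V = list(product(range(4), repeat=2))
    S = [(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)]
    return {(a, b): {((a + da) % 4, (b + db) % 4) for da, db in S}
            for a, b in V}

def count_cycles(adj, k):
    # Count simple k-cycles: fix the smallest vertex of each cycle as the
    # DFS start; every cycle is then found exactly twice (once per
    # traversal direction), so divide the raw total by 2.
    order = {v: i for i, v in enumerate(sorted(adj))}
    total = 0
    def extend(start, v, depth, visited):
        nonlocal total
        if depth == k:
            total += start in adj[v]
            return
        for w in adj[v]:
            if order[w] > order[start] and w not in visited:
                visited.add(w)
                extend(start, w, depth + 1, visited)
                visited.remove(w)
    for s in adj:
        extend(s, s, 1, {s})
    return total // 2

R, S = rook_4x4(), shrikhande()
# Both graphs are strongly regular with parameters (16, 6, 2, 2), hence
# 6-regular with the same number of triangles (32 each) ...
assert all(len(nbrs) == 6 for nbrs in R.values())
assert all(len(nbrs) == 6 for nbrs in S.values())
print(count_cycles(R, 3), count_cycles(S, 3))
# ... yet their numbers of 8-cycles differ, as stated in the reply above.
print(count_cycles(R, 8), count_cycles(S, 8))
```

Since both graphs share the SRG(16, 6, 2, 2) parameters, all short cycle statistics agree; the first length at which the counts diverge is 8, which is exactly where FWL(2)'s counting power stops.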
Summary: In this paper, the authors introduce the d-DRFWL(2) GNNs to count certain graph substructures, including cycles, cliques, and paths, which is essential for various graph-mining tasks. The authors prove that with d = 2, d-DRFWL(2) GNNs can already count up to 6-cycles, retaining most of the cycle counting power of FWL(2). Since the d-DRFWL(2) GNNs restrict the distance of message propagation, they are much more efficient than other GNN models. Experiments on both synthetic datasets and real molecular datasets are consistent with the theoretical results. Strengths: S1. The experiments are conducted on both real and synthetic datasets, where the results are convincing. S2. The theoretical results are sound and easy to follow. S3. The paper is clear and easy to follow. Weaknesses: W1. In my opinion, the scope of this particular line of research, which examines the expressive capabilities of GNNs by quantifying subgraph structures, appears to be rather limited. As stated in [1], GNNs are Turing complete (LOCAL algorithm) under certain conditions, indicating that they can effectively address any graph mining task with the same power as traditional algorithms. Consequently, this line of research primarily relies on observations made in previous graph mining algorithms and attempts to reinterpret them in the framework of GNNs, albeit within specific model designs, i.e., data structures for processing the graph. It would be beneficial if the authors could provide a comprehensive explanation detailing how their approach represents an advancement in utilizing GNNs to solve the subgraph counting problem, surpassing conventional algorithms [2]. W2. Although the authors demonstrate that 2-DRFWL(2) can count 6-cycles, I am curious about the ability of 3-DRFWL(2) (or even larger $d$) to solve the cycle counting problem with larger subgraph sizes. If possible, it would be essential to understand the relationship between the minimal distance and the subgraph size. 
Additionally, exploring the impact of different values of K in the K-FWL test on the expressive power of GNNs would be interesting. [1] Loukas, A. (2019, September). What graph neural networks cannot learn: depth vs width. In International Conference on Learning Representations.\ [2] Ribeiro, P., Paredes, P., Silva, M. E., Aparicio, D., & Silva, F. (2021). A survey on subgraph counting: concepts, algorithms, and applications to network motifs and graphlets. ACM Computing Surveys (CSUR), 54(2), 1-36. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Q1. There are other essential subgraph structures such as k-plexes and k-cores. I wonder whether the proposed framework can count these structures. Q2. The authors propose the existence of a clear hierarchy of expressiveness for d-DRFWL(2) GNNs as the value of $d$ increases. However, no experiments have been conducted to verify this claim across different values of $d$. Therefore, I would like to suggest the authors include a sensitivity test to support their theoretical findings. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Please check the weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments and suggestions. We respond to all the weaknesses and questions below. **Reply to W1.** We thank the reviewer for providing relevant works [1][2]. Although [1] claims that message passing GNNs (MP-GNNs) can be Turing complete, the assertion is made under fairly strong assumptions, such as requiring **a unique identity labeling given to each node**, which explicitly breaks the permutation symmetry. As explained in Appendix D of [1], once we remove the node identity labeling, MP-GNNs are no longer Turing universal. Actually, MP-GNNs without node identity labeling are no more powerful than the WL(1) test (as stated in [a]), and cannot count any cycles. This fact indicates that they fail to **causally** recognize graph substructures. In many real-world tasks, such as molecular property prediction, it is impossible to assign unique labels to nodes (for example, it is hard to consistently determine which node should be labeled as "0"-th in a benzene ring); instead, only models that respect permutation symmetry are considered. As is verified by our theory and experiments, with $d=2$, $d$-DRFWL(2) GNNs do have an outstanding substructure counting power compared with other permutation-symmetric models. There exist works [b, c] that assign unique random features/labels to nodes (breaking permutation symmetry), but they all have bad generalization performance on large real-world datasets since even two isomorphic graphs may have different representations using random features. [2] surveys traditional algorithms addressing the substructure counting problem. Different from our method, they are generally not data-driven. As we are to point out below, these traditional methods are **not directly comparable** with our method. This is because the ultimate goal of designing more powerful GNNs is to **tackle the traditionally intractable tasks in a data-driven manner**. 
Therefore, those GNN models (including ours) are not tailored to **solely** solving the substructure counting problem. Indeed, if certain substructure counts (or known functions of them) are the only targets, one should always apply traditional approaches instead of training a GNN, since traditional approaches avoid expensive training and have better accuracy and explainability. However, if the task is intractable by traditional approaches (e.g. molecular property prediction), then GNNs with strong cycle counting power (like ours) can work well. Therefore, the traditional methods and our model **differ in their domains of application**, and should not be directly compared. Despite the above discussion, substructure counting power remains a practical metric to evaluate a GNN's expressiveness, as stated in the 1st point in our general response. It is from this perspective that our main contribution (proposing efficient GNNs with provable cycle counting power) should be understood. [a] K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? In ICLR, 2018. [b] Abboud, Ralph, et al. "The surprising power of graph neural networks with random node initialization." arXiv preprint arXiv:2010.01179 (2020). [c] Sato, Ryoma, Makoto Yamada, and Hisashi Kashima. "Random features strengthen graph neural networks." SDM 2021. **Reply to W2.** Thanks for the suggestion. As stated in the 2nd point of our general response, we have proved that increasing d from 2 to 3 makes d-DRFWL(2) able to count 7-cycles. However, further increasing d to $d\geqslant 4$ does not bring about stronger cycle counting power, since even FWL(2) cannot graph-level count cycles of length 8 or more. As for the distance-restricted version of FWL(k) with k>2, the meaning of "mutual distance" within k-tuples (k>2) is somewhat opaque. 
If the "mutual distance" within a k-tuple is defined as the maximal distance between nodes in it, then we believe that the resulting d-DRFWL(k) will form a similar expressiveness hierarchy with increasing d; moreover, their expressive power is similarly upper-bounded by FWL(k). However, since the cycle counting power of FWL(k) with k>2 is still open, the cycle counting power of d-DRFWL(k) defined above remains open. **Reply to Q1.** Thanks for raising the valuable questions. We first clarify the definitions of k-plexes and k-cores as follows: * A k-plex in G is an induced subgraph of G in which every node has degree at least $|S|-k$, where S is the node set of the subgraph. * A k-core in G is an induced subgraph of G in which every node has degree at least k. For k-plexes, it is an NP-complete problem even to find the maximal k-plex of a graph G [a]. Therefore, it is generally impossible for our model to count k-plexes. Indeed, if k=1, as 1-plexes are simply cliques, our model fails to count 1-plexes with size larger than 4. For k-cores, notice that a k-core can have arbitrarily large diameter, which may make it "beyond the reach" of d-DRFWL(2) GNNs. For example, if k=2, then any induced cycle of G is a 2-core in G. Taking the example of **G being two $(3d+1)$-cycles and H being a single $(6d+2)$-cycle** (between which d-DRFWL(2) GNNs fail to discriminate), we see that G has two 2-cores while H has one. Therefore, d-DRFWL(2) GNNs cannot count 2-cores for any d. We believe that counterexamples for larger k can be similarly constructed, by considering other regular graph pairs with large diameters. Summarizing the above discussion, the answer to **Q1** is generally negative. [a] Balasundaram, Balabhaskar, Butenko, Sergiy, & Hicks, Illya V. (2011). Clique Relaxations in Social Network Analysis: The Maximum k-Plex Problem. Operations Research, 59, 133-142. 10.1287/opre.1100.0851. **Reply to Q2.** Thanks for the constructive suggestion. 
We have included in the 2nd point of the "general response to all reviewers" a thorough experimental study of $d$-DRFWL(2) GNNs with two $d$ values: $d=2$ and $d=3$. The experimental results should provide strong support for our theory. --- Rebuttal Comment 1.1: Comment: I think the authors have clarified most of my concerns. I would be happy to adjust my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for diligently reading our responses and giving the affirmative feedback.
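The 2-core counterexample in the reply to Q1 above (with d = 2: G is two 7-cycles, H is a single 14-cycle, and G has two 2-cores while H has one) can be checked directly. The sketch below is an illustration added here, not part of the original response; it counts the connected components that survive 2-core peeling.

```python
def cycle_graph(n, offset=0):
    # Adjacency dict of an n-cycle on vertices offset .. offset+n-1.
    return {offset + i: {offset + (i - 1) % n, offset + (i + 1) % n}
            for i in range(n)}

def two_core_components(adj):
    # Peel vertices of degree < 2 until none remain, then count the
    # connected components of what is left (each is a maximal 2-core
    # component for these graphs).
    adj = {v: set(N) for v, N in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in [v for v, N in adj.items() if len(N) < 2]:
            for w in adj[v]:
                adj[w].discard(v)
            del adj[v]
            changed = True
    seen, comps = set(), 0
    for v in adj:
        if v not in seen:
            comps += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return comps

d = 2
G = {**cycle_graph(3 * d + 1), **cycle_graph(3 * d + 1, offset=3 * d + 1)}  # two 7-cycles
H = cycle_graph(6 * d + 2)  # one 14-cycle
print(two_core_components(G), two_core_components(H))  # -> 2 1
```

Both graphs are 2-regular on 14 vertices, so nothing is peeled; the difference shows up only in the number of components, which is exactly the global property the distance-restricted test cannot see.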
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful feedback and constructive suggestions. All the comments have been scrupulously considered, and we will integrate the suggestions into our revised version of the paper. To address the common concerns of the reviewers, below we restate the focus of our paper by answering a related question, and present our additional results obtained during the rebuttal period. **1. Why use cycle counting power as a metric for the expressiveness of GNNs?** Many reviewers raise questions about how $d$-DRFWL(2) GNNs fit into other expressiveness hierarchies, such as $k$-WL [a], $\delta$-$k$-WL [b] and LFWL(2)/SLFWL(2) [c]. Although these questions are meaningful in themselves, they deviate from our focus. The major concern of our paper is **to propose efficient GNNs with provably strong cycle counting power**. Therefore, cycle counting power is chosen as a metric to evaluate the expressiveness of GNNs. There are two practical reasons to use this metric instead of (the more widely used) discriminating power: * Many real-world tasks (such as molecular property prediction) rely heavily on cycle counts. Therefore, GNNs with provable cycle counting power can capture important inductive bias closely related to such tasks, while GNNs that are incapable of counting cycles inevitably fail to make causal predictions on such tasks. * For many of the known expressiveness hierarchies, such as the WL hierarchy [a], the separation in expressiveness is often based on delicately crafted counterexamples, which provide little insight into the expressiveness exhibited in practical tasks. In contrast, cycle counting power remains a useful metric when we need to evaluate the practical expressiveness of a model on given datasets. **2. Additional studies on $d$-DRFWL(2) GNNs with $d>2$.** As pointed out by many reviewers, currently our paper only contains extensive studies on the cycle counting power of 2-DRFWL(2) GNNs. 
There is no investigation on the cycle counting power of $d$-DRFWL(2) GNNs with larger $d$, nor any experimental comparison between 2-DRFWL(2) GNNs and $d$-DRFWL(2) GNNs with larger $d$ values. During the rebuttal period, we have thoroughly studied, both theoretically and empirically, the discriminating power (related to **Q4** of our paper) and node-level cycle counting power of $d$-DRFWL(2) GNNs, with $d>2$. Regarding the discriminating power of $d$-DRFWL(2) GNNs with larger $d$ values, we run 3-DRFWL(2) GNN on two synthetic datasets: (i) EXP and (ii) SR25. The result is shown in Table 1 of the PDF. We can see that (i) 3-DRFWL(2) GNN discriminates between **all graph pairs on EXP**, including those not separable by 2-DRFWL(2) GNN, verifying **the strict expressiveness gap between 2-DRFWL(2) GNN and 3-DRFWL(2) GNN**; (ii) 3-DRFWL(2) GNN fails on SR25 dataset with accuracy 6.67%. Regarding the node-level cycle counting power, we have **fully unveiled** the theoretical cycle counting power for $d$-DRFWL(2) with **arbitrary** $d\geqslant 2$: * 2-DRFWL(2) GNNs can node-level count 3, 4, 5, 6-cycles but cannot graph-level count 7-cycles (already proved in our paper) * $d$-DRFWL(2) GNNs with arbitrary $d\geqslant 3$ can node-level count 3, 4, 5, 6, 7-cycles but cannot graph-level count 8-cycles (**newly added**) A proof sketch of the second bullet point is given below. The negative result follows from the fact that $d$-DRFWL(2) GNNs are less powerful than FWL(2) and that FWL(2) cannot graph-level count 8-cycles [e]. For the positive result, it suffices to prove that 3-DRFWL(2) GNNs can node-level count 7-cycles. To calculate the count of 7-cycles, we first decompose a 7-cycle into a 3-path and a 4-path, both of which can be counted by 3-DRFWL(2) GNNs. Then, we enumerate all possible cases in which some of the nodes in the 3-path and the 4-path coincide. There are 12 cases in total, all of them shown in Figure 1 of the PDF. 
By examining each kind of substructure, we confirm that all of them can be counted by 3-DRFWL(2) GNNs, leading to the positive result. To verify our theory, we conduct node-level cycle counting experiments on the Substructure Counting dataset, for both 2-DRFWL(2) GNN and 3-DRFWL(2) GNN. Particularly, we add the task of node-level counting 7-cycles. The result is shown in Table 3 of the PDF. We see that on the task of counting 3, 4, 5 and 6-cycles, 3-DRFWL(2) GNN achieves comparable performance with 2-DRFWL(2) GNN; on the task of counting 7-cycles, 3-DRFWL(2) GNN greatly outperforms 2-DRFWL(2) GNN, validating our theory. We point out that the fact that 3-DRFWL(2) GNNs can node-level count 7-cycles **confirms the existence of GNNs with the same cycle-counting power as FWL(2) but strictly lower complexity**. We further investigate the real-world performance of 3-DRFWL(2) GNNs by conducting ablation studies on the ZINC-12K dataset. The result is shown in Table 2 of the PDF. It is worth noticing that by only introducing part of the aggregations related to distance-3 tuples, a great performance gain can be observed. [a] J. Cai, M. Fürer, and N. Immerman. An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389–410, 1992. [b] C. Morris, G. Rattan, and P. Mutzel. Weisfeiler and leman go sparse: Towards scalable higher-order graph embeddings. Advances in Neural Information Processing Systems, 33:21824–21840, 2020. [c] B. Zhang, G. Feng, Y. Du, D. He, and L. Wang. A complete expressiveness hierarchy for subgraph gnns via subgraph weisfeiler-lehman tests. arXiv preprint arXiv:2302.07090, 2023. [d] Ribeiro, P., Paredes, P., Silva, M. E., Aparicio, D., & Silva, F. (2021). A survey on subgraph counting: concepts, algorithms, and applications to network motifs and graphlets. ACM Computing Surveys (CSUR), 54(2), 1-36. [e] V. Arvind, F. Fuhlbrück, J. Köbler, and O. Verbitsky. 
On weisfeiler-leman invariance: Subgraph counts and related graph properties. Journal of Computer and System Sciences, 113:42–59, 2020. Pdf: /pdf/054b69e0c0b531b226d4c150564bc9d4c0735e6e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This work explores the ability of graph neural networks to count certain graph substructures, especially cycles. While past works count by collecting subgraphs, this work avoids such a burdensome procedure and constructs a local method. Theoretical analysis is presented in relation to the Folklore Weisfeiler-Lehman test. Accordingly, the work proposes d-Distance-Restricted FWL(2) GNNs (FWL(2) being 2-dimensional Folklore Weisfeiler-Lehman), which incorporate the Folklore Weisfeiler-Lehman test algorithm into the aggregation steps of a GNN. Subsequent theoretical analysis and experimental evaluations of d-Distance-Restricted FWL(2) GNNs are provided. In particular, experimental results show that the proposed GNN shows superior results on counting cycles or other structures (in synthetic datasets) and better performance on graph regression tasks on chemical datasets, as well as being empirically more efficient. Strengths: - The problem of analyzing the ability of GNNs to count substructures on a graph is interesting. Intuitively, counting cycles in a graph is non-intuitive for a node-centric algorithm, where the operations are performed solely on each node, which only receives information from its neighbors at each iteration. - In light of the above, the observation that counting cycles remains intrinsically \emph{local} is a good and noteworthy observation. - The technical content is good and supported by the experiments. Weaknesses: ### Neglected baseline comparisons with other higher-order GNNs As explained by the authors, [35] and [46] present GNNs utilizing node tuples for better expressivity. As the adaptation of FWL(2) is the main component of this work, it is concerning that [35] and [46] are not included in the baseline comparisons. Since FWL is adopted into the GNN to improve over subgraph GNNs, other (representative) works on higher-order GNNs should also be compared. As it stands, there is no information in the paper on how this work compares to other higher-order GNNs. 
Does the proposed work have better performance and better efficiency? Does the proposed work trade off performance for efficiency and space? We do not know. In continuation of the above, as [35] is a highly cited paper, further searching found [a, b, c], which also mention cycle counting in their papers but are not discussed or mentioned in this work. - [a] A New Perspective on "How Graph Neural Networks Go Beyond Weisfeiler-Lehman?" ICLR 2022 - [b] A Practical, Progressively-Expressive GNN. NeurIPS 2022 - [c] Graph Neural Networks with Local Graph Parameters. NeurIPS 2021 ### Unanswered questions: ablation and trade-off for larger d While the work presents d-DRFWL(2) GNNs, the theoretical analysis and experiments are all limited to d=2. Furthermore, Q4 is not satisfactorily answered, as the experiment results are very close. Why not use a sufficiently large d to represent the FWL(2) test? ### There are some places that are unclear and could be improved by providing more explanation: - FWL is left unexplained in the abstract. The paper may improve its readability by briefly stating that FWL is a 2-tuple coloring scheme that solves graph isomorphism tests, whereas this work proposes to limit the scope of 2-tuples from all node pairs to only those within d-distance. - (line 108), "divided by a factor only depending on S." Isn't the factor simply the number of nodes in S? - The syntax of using double curly brackets {{ }} for multisets should be noted when it first appears in Equation (1). - (line 125) the meaning of the rightarrow with a bar ($\mapsto$) is not clear. - The meaning (or depiction) of the twelve targets of the QM9 dataset is not provided in the main paper or the appendix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can experiments be provided for comparison between the proposed method and [35] and [46] for performance and empirical efficiency? Can experiments and analysis be provided for $d\geq 2$? 
Can intuitive explanations be provided for each proof and concept (currently, only the 6-cycle is provided with an intuition)? - For example, it seems that the expressivity of up to 4-paths and 6-cycles is correlated, as the furthest node from a certain node in a 6-cycle requires a 4-path to get back to that node. Besides, is the 4-path attained by a pair of 2-tuples? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It is provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and thorough investigation of additional related works. We respond to all the weaknesses and questions below. **Reply to W1.** Thanks for introducing the related works and offering the suggestion. To compare the performance and efficiency of our model with the ones proposed in [35], we implement GNNs based on **SSWL, SSWL+, LFWL(2) and SLFWL(2)** (all of which are proposed in [35]), and run them on four real-world datasets (in the order of increasing graph size): (i) QM9, (ii) ogbg-molhiv, (iii) HomologyTAPE and (iv) ProteinsDB. Details of the datasets are already present in our paper. For all experiments, we use a dense $B\times d\times n\times n$ matrix to store the embeddings of all 2-tuples within a batch, where $B$ is the batch size, $d$ is the size of the embedding dimension, and $n$ is the maximal number of nodes among all graphs in the batch. On QM9 dataset, we use 5 layers with hidden size 64 for each model. We train for 400 epochs using Adam optimizer with initial learning rate 0.001, and plateau scheduler with patience 10, decay factor 0.9 and minimum learning rate 1e-5. The batch size is 64. Due to time and resource limitation, we compare the four methods with ours only on the first target (i.e., dipole moment $\mu$). The result is shown below. |Method| QM9 ( $\mathrm{MAE}_\mu$ ) | |:---:|:---:| |SSWL|0.438| |SSWL+|0.421| |LFWL(2)|0.439| |SLFWL(2)|0.435 | |2-DRFWL(2) (ours)|**0.346**| We see that on target $\mu$, our model outperforms all four methods proposed in [35]. We also compare the efficiency of our model with the four methods. The used metrics follow our paper. The results are shown below. |Method | Memory (GB) | Preprocess (s) | Train (s/epoch)| |:---:|:---:|:---:|:---:| |SSWL|3.97| 64|88.5| |SSWL+|4.39|64|86.4| |LFWL(2)|3.17|64|58.0| |SLFWL(2)|3.97|64|82.8| |2-DRFWL(2) (ours)|2.31|430|141.9| Our method has the best memory efficiency compared with all four methods proposed in [35]. 
However, both the preprocessing time and training time of our model are longer. This is because * The four methods proposed in [35] do not need preprocessing, but instead build dense representation matrices on the fly. * The graphs in QM9 are rather small (~18 nodes per graph). On graphs of such small scale, the advantage of our model's lower time complexity is greatly offset by the constant overhead brought about by scatter operations on sparse matrices; in contrast, the four methods proposed in [35] use dense matrix multiplication and `1*1` convolution, which are more optimized operations on GPU. On ogbg-molhiv, HomologyTAPE and ProteinsDB, with the same hyperparameters as those chosen in Appendix E.3 & E.4, all of the four methods run out of GPU memory, indicating their inferior memory efficiency on large graphs. In summary, our experimental studies show that 2-DRFWL(2) GNNs usually have better efficiency and scalability than GNNs based on SSWL, SSWL+, LFWL(2) or SLFWL(2), except on fairly small graphs. Additionally, it is often easier for 2-DRFWL(2) GNNs to capture the inductive bias suitable for molecular tasks. Next, we analyze the methods proposed in [46]. [46] proposes $\delta$-$k$-GNN and $\delta$-$k$-LGNN, which are also among the class of sparsified higher-order GNNs. For the case of $k=2$, both $\delta$-2-GNN and $\delta$-2-LGNN are encompassed by the framework of SSWL-GNN in [35]. This is because $\delta$-2-GNN uses aggregation schemes $agg_{uv}^P$, $agg_u^L$, $agg_v^L$, $agg_u^G$ and $agg_v^G$, while $\delta$-2-LGNN uses $agg_{uv}^P$, $agg_u^L$ and $agg_v^L$. Therefore, **the study of $\delta$-2-GNN and $\delta$-2-LGNN can be subsumed into our above study of SSWL-GNN**. We do not discuss the case of larger k since both $\delta$-k-GNN and $\delta$-k-LGNN would be overly expensive. 
The above experiments and analysis should address the questions concerning a comparison with other higher-order GNNs, since [35] and [46] are among the most popular methods within this framework. As for [a, b, c], we include a discussion below. * [a] proposes GraphSNN, which uses heuristics based on 1-hop subgraphs around nodes as well as their overlap subgraphs to enhance message passing. By its construction, GraphSNN can count 3-cycles, but fails to capture cycles longer than 4. * [b] proposes ($k$,$c$)($\leqslant$)-SetGNNs, which greatly reduce the effective number of $k$-tuples in WL($k$) by (i) removing node order, (ii) removing duplicate nodes, and (iii) restricting the number of connected components. Like $d$-DRFWL(2), this work leverages **locality** to reduce storage. Nevertheless, there is **currently no theoretical guarantee** on their cycle counting power. * [c] proposes $\mathcal{F}$-MPNNs, augmenting message passing GNNs with homomorphism counts. This method resembles GSN [ref1], and inherently encodes substructure counts. However, it is rather different from our method since it uses hand-crafted features. [ref1] G. Bouritsas, F. Frasca, S. Zafeiriou, and M. M. Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):657–668, 2022. **Reply for W2.** Thanks for the constructive advice. We have included a comprehensive study on larger d in the 2nd point of our general response. Due to the space limit, we have to defer our responses to the other weaknesses/questions to the author-reviewer discussion phase. We are sorry for the inconvenience, and will add the remaining responses immediately when the discussion period begins. --- Rebuttal 2: Title: Author Response to Reviewer 8bs4, continued Comment: We now respond to the other weaknesses/questions posed by Reviewer 8bs4. Those responses were not included in our rebuttal due to the space limit. 
**Reply for W3, point 1** Thanks for the suggestion. We will add a brief introduction in our revision. **Reply for W3, point 2** Not always, since for some substructures S in which not all nodes are structurally equal (such as tailed triangles or chordal cycles), our definitions for C(S,u,G) (see Definitions 4.1 and 4.2) require the node u to lie only at some special position(s) in S. In such cases, the factor is the number of nodes at "legal positions" in S. **Reply for W3, points 3, 4, 5** Thanks for the suggestion. We will add annotations to clarify these points in our revision. By the way, one can refer to, e.g., the PyTorch Geometric documentation of the QM9 dataset for the meaning of the twelve targets. **Reply for Q1.** See our response to **W1**. **Reply for Q2.** See our response to **W2**. **Reply for Q3.** The observation made in the bullet point is correct. Intuitively, in FWL(2) (or its distance-restricted version), if there is a $d_1$-walk from u to v, a $d_2$-walk from u to w and a $d_3$-walk from w to v, then the algorithm can detect a closed $(d_1+d_2+d_3)$-walk (and also a $(d_2+d_3)$-walk) that passes $u$ and $v$, since the update rule concatenates $(u,w)$ and $(w,v)$. This intuition works for all our theorems regarding path and cycle counts. For example, * **Counting 2-paths**: Combining two 1-walks * **Counting 3-paths**: Combining a 2-walk and a 1-walk * **Counting 4-paths**: Combining two 2-walks * **Counting 3, 4, 5-cycles**: These tasks reduce to counting 2, 3, 4-paths. * **Counting 6-cycles**: Combining three 2-walks --- Rebuttal Comment 2.1: Comment: Thank you. I am satisfied with your explanations and have raised the score to weak accept. --- Reply to Comment 2.1.1: Comment: We thank the reviewer for thoroughly reading our responses and giving the affirmative feedback. By the way, we now provide **full** QM9 results (on all 12 tasks) of GNNs based on SSWL, SSWL+, LFWL(2) and SLFWL(2). 
For each model, we consistently use 5 layers with hidden dimension size 64, and apply layer normalization after each layer. We train for 400 epochs using Adam optimizer. The initial learning rate is searched from {0.001, 0.002}. We also use plateau scheduler with patience 10, decay factor 0.9 and minimum learning rate 1e-5. The batch size is 64. We report the best MAE (mean absolute error) result for each task. |Target|SSWL|SSWL+|LFWL(2) |SLFWL(2) | 2-DRFWL(2) (ours)| |:---:|:---:|:---:|:---:|:---:|:---:| |$\mu$|0.438| 0.418 |0.439|0.435|**0.346**| |$\alpha$|0.294|0.271|0.315|0.289|**0.222**| |$\varepsilon_\mathrm{homo}$|0.00302|0.00298|0.00332|0.00308|**0.00226**| |$\varepsilon_\mathrm{lumo}$|0.00318|0.00291|0.00332|0.00322|**0.00225**| |$\Delta\varepsilon$|0.00427|0.00414|0.00455|0.00447|**0.00324**| |$R^2$|19.31|18.36|19.10|18.80|**15.04**| |$\mathrm{ZPVE}$|0.00021|0.00020|0.00022|0.00020|**0.00017**| |$U_0$|0.151|0.110|0.144|**0.083**|0.156| |$U$|0.163|**0.106**|0.143|0.121|0.153| |$H$|0.143|**0.120**|0.164|0.124|0.145| |$G$|0.158|0.115|0.164|**0.103**|0.156| |$C_v$|0.1138|0.1083|0.1192|0.1167|**0.0901**| We see that 2-DRFWL(2) GNN greatly outperforms SSWL, SSWL+, LFWL(2) and SLFWL(2) on all tasks **except $U_0,U,H$ and $G$**. On those four tasks, the performance of SSWL+ and SLFWL(2) is much better. We may give an explanation to the phenomenon with some knowledge of the underlying physics. All four targets, $U_0,U,H$ and $G$, are macroscopic thermodynamic properties of molecules. For such properties, inter-molecular interactions (such as hydrogen bonds), as well as interactions between molecules and the environment can be as important as interactions within molecules. Such interactions may occur between two atoms that are graph-theoretically distant (for example, between the “head” of a molecule and the “tail” of a nearby molecule). 
Since 2-DRFWL(2) GNN only keeps embeddings for node pairs $(u,v)$ with $d(u,v)\leqslant 2$, it is harder for it to capture such long-range interactions between distant nodes. In contrast, SSWL, SSWL+, LFWL(2) and SLFWL(2) keep an embedding for every node pair $(u,v)$, no matter how far $u$ is from $v$. This makes them easier to learn long-range interactions, compared with 2-DRFWL(2) GNN. We also remark that as shown in Table 3 of our main paper, subgraph GNNs like NGNN and I$^2$-GNN perform even worse than 2-DRFWL(2) GNN on the targets $U_0,U,H$ and $G$. This may be because **the $k$-hop subgraph extraction procedure in subgraph GNNs greatly inhibits the propagation of information between distant nodes**. In subgraph GNNs, a node is even ignorant of the existence of a far-away node since that node simply does not exist in its $k$-hop subgraph. On the other hand, though, in 2-DRFWL(2) GNN such a far-away node is still perceptible since the receptive field enlarges as we stack more 2-DRFWL(2) GNN layers. The above discussion actually shows that our model strikes a balance between **generating fine representations for local substructures** and **capturing long-range interactions**, compared with either subgraph GNNs or the GNNs proposed in [35].
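As an illustrative aside (added here, not part of the original response), the walk-composition intuition from the reply to Q3 above can be made concrete with plain adjacency-matrix powers: $(A^k)_{uv}$ counts $k$-walks from $u$ to $v$, so combining walk counts, and then correcting for coinciding nodes, yields cycle counts. The simplest instance is the triangle count $\mathrm{tr}(A^3)/6$.

```python
def matmul(A, B):
    # Plain matrix product for small integer adjacency matrices.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adjacency(n, edges):
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

# 6-cycle 0-1-2-3-4-5-0: (A^2)[u][v] counts 2-walks u -> w -> v.
C6 = adjacency(6, [(i, (i + 1) % 6) for i in range(6)])
C6_2 = matmul(C6, C6)
assert C6_2[0][2] == 1   # exactly one 2-walk: 0 -> 1 -> 2
assert C6_2[0][0] == 2   # closed 2-walks = degree of node 0

# Combining walks: closed 3-walks are counted by tr(A^3); every triangle
# contributes 6 of them (3 start vertices x 2 directions), so the
# triangle count is tr(A^3) / 6. K4 has 4 triangles.
K4 = adjacency(4, [(i, j) for i in range(4) for j in range(i + 1, 4)])
K4_3 = matmul(matmul(K4, K4), K4)
triangles = sum(K4_3[i][i] for i in range(4)) // 6
assert triangles == 4

# For longer cycles (e.g. 6-cycles from three 2-walks), the same idea
# applies, but the degenerate cases where walk endpoints coincide must
# be subtracted -- exactly the case analysis described in the reply.
```

The FWL(2) update, which aggregates over $W^{(t-1)}(u,w)$ and $W^{(t-1)}(w,v)$, performs this kind of walk composition for every pair $(u,v)$; the distance restriction in $d$-DRFWL(2) simply limits which pairs keep an embedding.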
UltraRE: Enhancing RecEraser for Recommendation Unlearning via Error Decomposition
Accept (poster)
Summary: This paper studies the recommendation unlearning problem and focuses on the exact unlearning approach. The authors rethink the exact unlearning framework from an ensemble-based perspective, and decompose the error into three components. The authors mainly modify the existing SOTA framework regarding the first and the third components. For the first component, a new optimal balanced clustering algorithm is proposed to improve efficacy and efficiency. For the third component, the model aggregator is simplified to improve efficiency. Extensive experiments are conducted on three real-world datasets and across different recommendation models. Strengths: 1. The topic (recommendation unlearning) is significant and timely. Recommendation is a typical scenario of machine unlearning where the data naturally comes from diverse users. 2. This paper rethinks the exact unlearning framework from an ensemble-based perspective, and modifies the framework with theoretical guidance, which distinguishes this work from prior intuition-motivated modifications. 3. One key technical contribution of this paper is the proposed optimal balanced clustering algorithm, which incorporates the balanced constraint into the optimization process, addressing the incongruity between user similarity and shard balancing. 4. Extensive experiments are conducted to evaluate the proposed framework. The evaluations include the performance on two unlearning goals and an ablation study of each stage. 5. In general, this paper is well-organized and easy to follow. Weaknesses: 1. The optimization process of Eq (5) is not clear. Is it identical to prior work [6, 7]? Please provide more details. 2. As shown in Table 2, the improvements in stages I and III appear to be significant. But the time costs of these stages are not comparable with that of stage II, which the authors leave unchanged. Therefore, this makes the overall improvement insignificant. 3. 
There are some typos in the paper: Line 203 “\sum_i w_{ij}” -> “\sum_j w_{ij}” Line 365 “[6, 7, 24] Instead” -> “[6, 7, 24]. Instead” Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to Weakness 1. This paper focuses on exact unlearning, but approximate unlearning has gained much attention due to its efficiency. There is also an approximate unlearning method in recommendation unlearning. Although it adopts a different approach, it would be helpful to have a discussion about it. [a] Li, Y., Chen, C., Zheng, X., Zhang, Y., Gong, B., & Wang, J. (2023). Selective and Collaborative Influence Function for Efficient Recommendation Unlearning. arXiv preprint arXiv:2304.10199. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Optimization process in stage III (model combination). **Response**: Although different from prior work [6, 7] in terms of the model used, our method also utilizes the stochastic gradient descent algorithm for optimization. We will provide further explanations regarding these details during the revision. **Q2**: Significance of efficiency enhancement. **Response**: As mentioned in the “Broader Impacts and Limitations” section, a common limitation of ensemble retraining frameworks is the lack of experiments on large-scale datasets. Current experiments do not sufficiently demonstrate the efficiency enhancement, because the majority of time is spent during stage II (independent training) while the enhancements occur in stages I (non-overlapping division) and III (model combination). Thus, following [6], we further conduct experiments on ML-10M (large-scale dataset) with 50 shards (large shard number), and report the results in the table below.

| ML-10M | DMF-RecEraser | DMF-UltraRE | LightGCN-RecEraser | LightGCN-UltraRE |
|-----------|---------------|-------------|--------------------|------------------|
| Stage I | 872.53m | 259.18s | 879.06m | 257.83s |
| Stage II | 213.55s | 208.44s | 860.12s | 852.32s |
| Stage III | 83.74s | 67.26s | 384.50s | 376.57s |
| Total | 877.48m | 534.88s | 899.80m | 1,486.72s |

The results indicate that our proposed UltraRE significantly improves efficiency compared to RecEraser. Specifically, in stages I and III, UltraRE demonstrates average efficiency improvements of **20,227.87%** and 13.30% respectively. Note that, in the large-scale dataset (ML-10M), our proposed clustering algorithm (OBC) significantly outperforms the BKM used in RecEraser, *reducing clustering time from several hours to just a few minutes* (in stage I, 872.53m/879.06m vs 259.18s/257.83s).
The experimental results demonstrate higher efficiency improvements when compared to the results reported in the paper, providing additional evidence of the substantial efficiency enhancements achieved by our method. **Q3**: Typos. **Response**: We will carefully review the paper and fix any typos during revision. **Q4**: Discussion about approximate recommendation unlearning. **Response**: We will provide a brief introduction to the mentioned paper in the “Related Work” section. --- Rebuttal Comment 1.1: Comment: The author has addressed all my previous concerns. I will keep my positive rating.
Summary: This paper tackles issues in recommendation unlearning algorithms, specifically focusing on RecEraser. The authors introduce UltraRE, a framework devised to optimize RecEraser. UltraRE aims at mitigating three primary losses: redundancy, relevance, and combination. By integrating transport weights into the clustering algorithm, it addresses redundancy loss, striking an equilibrium between collaboration and balance. Besides, it simplifies the model combiner without diminishing efficacy. The authors put these modifications together to enhance both unlearning efficiency and model utility, and empirically validate the proposed framework through extensive experiments on three real-world datasets. Strengths: * The paper is written in a clear and engaging style, making it easy to follow. * It addresses a timely and critical issue, machine unlearning in recommender systems, which is essential in the context of data privacy regulations. * The proposed UltraRE framework is straightforward and effective, and the authors conduct extensive experiments on three real-world datasets to demonstrate its efficacy. Weaknesses: * The improvements in efficiency and effectiveness brought by UltraRE are very marginal, raising questions about its practical impact and value. * The novelty in UltraRE is limited, as it mainly employs existing methods to improve the previous method RecEraser. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the ablation study, the authors investigated the inertia performance of clustering algorithms in data partitioning for different algorithms. From Figure 4, it is evident that OBC (UltraRE) and BRD (SISA) significantly outperform BKM (RecEraser) in terms of clustering inertia. However, UltraRE does not show a substantial improvement over RecEraser in unlearning performance, and interestingly, SISA performs worse than RecEraser.
* Could the authors elaborate on the relationship between clustering performance (inertia) and unlearning performance? * Additionally, what specifically contributes to the improvements presented by the proposed method, UltraRE? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Novelty of our proposed UltraRE. **Response**: Please refer to *Response to Q1* in the rebuttal to all reviewers. **Q2**: Relationship between clustering performance (inertia) and unlearning performance. **Response**: Our proposed UltraRE, being an exact unlearning approach, inherently achieves optimal unlearning performance (completeness). Thus, considering this observation and the comments provided, we infer that the unlearning performance you mentioned refers to model utility (recommendation performance). Inertia is the summation of the inner-cluster distances over all sample-centroid pairs, ranging in $[0, \infty)$. Model utility is evaluated by NDCG and HR, both of which range in $[0, 1]$. Therefore, the scales of inertia and model utility are different, leading to inconsistent improvements between them. To further investigate the relationship between inertia and model utility (NDCG@10), we conduct another ablation study. Specifically, we vary the clustering algorithms in stage I and use Logistic Regression (LR) in stage III for all compared methods, and report the corresponding inertia and NDCG (during learning). We choose LR due to its comparable performance to ATTention networks (ATT), as shown in the "Effect of Combination" section. For easy comparison, we also report the results of the original RecEraser and SISA.
| ML-100K | Stage I | Stage III | Inertia (stage I) | DMF-NDCG (stage III) | LightGCN-NDCG (stage III) |
|--------------|---------|-----------|-------------------|----------------------|---------------------------|
| UltraRE | OBC | LR | 816.32 | 0.3847 | 0.3859 |
| RecEraser-LR | BKM | LR | 1,738.24 | 0.3792 | 0.3808 |
| RecEraser | BKM | ATT | 1,738.24 | 0.3795 | 0.3812 |
| SISA-LR | BRD | LR | 852.50 | 0.3824 | 0.3836 |
| SISA | BRD | AVG | 852.50 | 0.3716 | 0.3684 |

| ML-1M | Stage I | Stage III | Inertia (stage I) | DMF-NDCG (stage III) | LightGCN-NDCG (stage III) |
|--------------|---------|-----------|-------------------|----------------------|---------------------------|
| UltraRE | OBC | LR | 729.46 | 0.4042 | 0.4241 |
| RecEraser-LR | BKM | LR | 1,318.31 | 0.3968 | 0.4169 |
| RecEraser | BKM | ATT | 1,318.31 | 0.3973 | 0.4171 |
| SISA-LR | BRD | LR | 1,129.72 | 0.3972 | 0.4173 |
| SISA | BRD | AVG | 1,129.72 | 0.3956 | 0.3941 |

| ADM | Stage I | Stage III | Inertia (stage I) | DMF-NDCG (stage III) | LightGCN-NDCG (stage III) |
|--------------|---------|-----------|-------------------|----------------------|---------------------------|
| UltraRE | OBC | LR | 775.22 | 0.4294 | 0.4345 |
| RecEraser-LR | BKM | LR | 1,319.61 | 0.4230 | 0.4278 |
| RecEraser | BKM | ATT | 1,319.61 | 0.4234 | 0.4281 |
| SISA-LR | BRD | LR | 1,052.74 | 0.4255 | 0.4273 |
| SISA | BRD | AVG | 1,052.74 | 0.4063 | 0.4025 |

Based on the presented tables, we obtain several key observations:

* Comparing UltraRE, RecEraser-LR, and SISA-LR, we observe that a lower inertia is associated with a higher recommendation performance. This suggests a positive correlation between clustering performance and recommendation performance.
* LR demonstrates similar recommendation performance to ATT, surpassing AVG, when compared to the LR-versions (RecEraser-LR and SISA-LR) and their original versions (RecEraser and SISA).
This finding is consistent with the results from the "Effect of Combination" section. * In ML-1M and ADM datasets, SISA-LR achieves comparable recommendation performance to RecEraser-LR, while outperforming it in ML-100k. This aligns with the differences in inertia across the various datasets, indicating that (i) clustering performance generally correlates positively with recommendation performance, and (ii) the advancements in RecEraser are primarily attributed to the effectiveness of the combiner (ATT). In conclusion, superior clustering performance corresponds to improved recommendation performance. Our proposed clustering algorithm (OBC) exhibits substantial superiority over the compared methods. **Q3**: What contributes to the improvements of UltraRE? **Response**: For model utility improvement, it is attributed to the superior clustering performance of OBC (as shown in *Response to Q2*). For efficiency improvement, it is attributed to both LR and OBC (as shown in *Response to Q2* in the rebuttal for all reviewers). --- Rebuttal Comment 1.1: Comment: Thanks for the response. Most of my concerns were addressed, I will keep my positive rating.
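For concreteness, the inertia metric as defined above can be computed in a few lines of NumPy. This is an illustrative sketch (the function name and the use of unsquared Euclidean distance are our assumptions), not the authors' evaluation code:

```python
import numpy as np

def inertia(X, labels, centroids):
    """Sum of inner-cluster distances over all sample-centroid pairs.

    X: (N, D) user features; labels: (N,) shard assignments;
    centroids: (k, D) shard centroids. Lower means tighter clusters.
    """
    return float(sum(
        np.linalg.norm(X[labels == k] - c, axis=1).sum()
        for k, c in enumerate(centroids)
    ))
```

Since inertia is a sum of distances over the same set of users, it allows two partitions of the same dataset to be compared directly, even though its scale is unrelated to that of NDCG or HR.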
Summary: This paper addresses the problem of recommendation unlearning, which arises due to privacy concerns and the right to be forgotten. The authors identify limitations in previous methods, which prioritize unlearning efficiency over preserving model utility and fail to optimize the trade-off between collaboration and balance. To address these limitations, the authors propose a new framework called UltraRE, which simplifies and powers RecEraser for recommendation tasks. UltraRE optimizes the equilibrium between collaboration and balance, ensures convergence on group data, and simplifies the combination estimator. The authors conduct extensive experiments on three real-world datasets and demonstrate that UltraRE outperforms the state-of-the-art recommendation unlearning framework, RecEraser, achieving improvements in recommendation accuracy and unlearning efficiency on ML-100k, ML-1M, and ADM, respectively. Strengths: - The paper formalizes machine unlearning well as a three-target problem comprising unlearning completeness, unlearning efficiency, and model utility. - The proposed method cleverly transforms KMeans into the Monge-Kantorovich problem to enable optimization of the clustering process. - The proposed UltraRE achieves superior model utility as well as unlearning efficiency. Weaknesses: - The paper only focuses on exact unlearning and ignores the important research line of approximate unlearning [1-2], which is much more efficient when dealing with multiple unlearning requests. - Compared to the SOTA method RecEraser, the proposed UltraRE only makes incremental improvements, including modifying the KMeans clustering algorithm and applying a simpler aggregation method, which makes the technical contribution rather limited. - The experimental datasets are all of small size, which may limit the accuracy of the efficiency test.
[1] GIF: A General Graph Unlearning Strategy via Influence Function [2] GNNDelete: A General Strategy for Unlearning in Graph Neural Networks Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How can the proposed method remain efficient when dealing with multiple deletion requests at one time? - What is the major technical contribution of UltraRE compared to RecEraser, and how significant is it? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The paper targets solving the negative societal impact of existing works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments and suggestions. We hope our response addresses your concerns. **Q1**: Discussion about approximate unlearning. **Response**: First of all, it is important to highlight that, in this paper, we focus on exact recommendation unlearning, which is orthogonal to the studies you mentioned, i.e., approximate graph unlearning, in the field of machine unlearning. Exact unlearning approaches fully guarantee the most fundamental requirement of unlearning, i.e., completeness. Algorithmic retraining has been established as the sole authoritative way of ensuring completeness [37]. This is the inherent advantage of exact unlearning and cannot be attained through approximate unlearning. We do agree that approximate unlearning approaches have gained much attention due to their efficiency, and we did not ignore them. Instead, we have reviewed various representative studies in the “Related Work” section. We sincerely apologize for missing the studies you mentioned, and we will ensure to include them during revision. **Q2**: How to maintain efficiency when dealing with multiple requests? **Response**: From a theoretical perspective, ensemble retraining frameworks can increase shard number and employ parallel training. In this way, the time required for unlearning is limited to sub-model retraining. Besides the unique authority and inherent advantages of exact recommendation unlearning that we mentioned in *Response to Q1*, it is noteworthy that within the context of recommendation, the approximate unlearning approaches are not as efficient as expected. This inefficiency can be attributed to two main reasons. First, the inherent computation of the Hessian matrix is time-consuming. Although approximations can be made to accelerate the computation process, these approximations compromise the completeness and model utility, and do not eliminate the need for at least one offline computation of the Hessian matrix. 
Second, both user and item embeddings are dense feature matrices that contain a large number of parameters, thereby further increasing computational overhead in the context of recommendation. From a practical perspective, current experiments do not sufficiently demonstrate the efficiency enhancement, because the majority of time is spent during stage II (independent training) while the enhancements occur in stages I (non-overlapping division) and III (model combination). Thus, following [6], we further conduct experiments on ML-10M (large-scale dataset) with 50 shards (large shard number), and report the results in the table below.

| ML-10M | DMF-RecEraser | DMF-UltraRE | LightGCN-RecEraser | LightGCN-UltraRE |
|-----------|---------------|-------------|--------------------|------------------|
| Stage I | 872.53m | 259.18s | 879.06m | 257.83s |
| Stage II | 213.55s | 208.44s | 860.12s | 852.32s |
| Stage III | 83.74s | 67.26s | 384.50s | 376.57s |
| Total | 877.48m | 534.88s | 899.80m | 1,486.72s |

The results indicate that our proposed UltraRE significantly improves efficiency compared to RecEraser. Specifically, in stages I and III, UltraRE demonstrates average efficiency improvements of **20,227.87%** and 13.30% respectively. Note that, in the large-scale dataset (ML-10M), our proposed clustering algorithm (OBC) significantly outperforms the BKM used in RecEraser, *reducing clustering time from several hours to just a few minutes* (in stage I, 872.53m/879.06m vs 259.18s/257.83s). **Q3**: The experimental datasets are of small size, which may limit the accuracy of the efficiency test. **Response**: As responded above, following the SOTA [6], we have conducted experiments on ML-10M (the largest dataset used in [6]). The experimental results demonstrate higher efficiency improvements when compared to the results reported in the paper, providing additional evidence of the substantial efficiency enhancements achieved by our method.
**Q4**: Major technical contribution compared with RecEraser (SOTA). **Response**: Please refer to *Response to Q1* in the rebuttal to all reviewers. --- Rebuttal 2: Title: A kind request for your response Comment: We hope this message finds you well. We are writing to kindly request an update on the status of your response to our rebuttal. As the deadline for discussion draws near, we would greatly appreciate it if you could let us know whether the rebuttal addresses your concerns. Thanks for your attention! --- Rebuttal 3: Title: Reminder: Response to the authors' rebuttal Comment: Dear Reviewer zuBB, I hope this message finds you well. We would like to remind you about your response to the authors' rebuttal. The authors have submitted their rebuttal, and we are eagerly awaiting your response to proceed with the final decision. Your expertise and input are highly valued, and we kindly request you to review the authors' rebuttal and share your thoughts as soon as possible. This will enable us to maintain the review timeline and provide timely feedback to the authors. The deadline for your rebuttal is August 21. Thank you for your time and dedication to the review process. AC
Summary: This paper investigates the problem of recommendation unlearning, with a specific focus on the exact unlearning approach. The authors adopt an ensemble-based perspective to redefine the exact unlearning framework and break the framework down into three components regarding prediction error. The primary modification made by the authors pertains to the first component, where a novel optimal balanced clustering algorithm is proposed. The authors also simplify the third component to improve efficiency. Extensive experiments are conducted across three benchmark datasets. Strengths: 1) The investigated problem is interesting. Unlearning has gained increasing attention in the field of privacy-preserving machine learning. This paper focuses on a practical scenario of unlearning, i.e., recommendation systems. 2) This paper is easy to comprehend, and the ideas within are clearly presented. 3) This paper uses the theory of ensemble learning to provide a theoretical foundation for the exact unlearning approach. This provides valuable insights for improving the performance of the exact unlearning approach. 4) The authors propose a novel clustering algorithm which achieves an optimal trade-off between (i) shard balance and (ii) sample similarity. This modification effectively tackles the trade-off issue from an optimization perspective. Weaknesses: 1) The improvement brought by the proposed optimal balanced clustering algorithm appears to be inconsistent. On the one hand, the inertia is significantly reduced. On the other hand, the model utility is not significantly improved. Please explain this phenomenon. 2) There exist some small errors; for example, in line 211, the complexity is O(Nk) instead of O(N^2). Technical Quality: 3 good Clarity: 3 good Questions for Authors: The improvement brought by the proposed optimal balanced clustering algorithm appears to be inconsistent. On the one hand, the inertia is significantly reduced.
On the other hand, the model utility is not significantly improved. Please explain this phenomenon. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the issue that the efficiency improvement of stages I and III is not significant, and provided a practical experimental setting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable comments and suggestions. We hope our response addresses your concerns. **Q1**: Inconsistent improvements between inertia and model utility. **Response**: Inertia is the summation of the inner-cluster distances over all sample-centroid pairs, ranging in $[0, \infty)$. Model utility is evaluated by NDCG and HR, both of which range in $[0, 1]$. Therefore, the scales of inertia and model utility are different, leading to inconsistent improvements between them. To further investigate the relationship between inertia and model utility (NDCG@10), we conduct another ablation study. Specifically, we vary the clustering algorithms in stage I (non-overlapping division) and use Logistic Regression (LR) in stage III (model combination) for all compared methods, and report the corresponding inertia and NDCG (during learning). We choose LR due to its comparable performance to ATTention networks (ATT), as shown in the "Effect of Combination" section. For easy comparison, we also report the results of the original RecEraser and SISA.
| ML-100K | Stage I | Stage III | Inertia (stage I) | DMF-NDCG (stage III) | LightGCN-NDCG (stage III) |
|--------------|---------|-----------|-------------------|----------------------|---------------------------|
| UltraRE | OBC | LR | 816.32 | 0.3847 | 0.3859 |
| RecEraser-LR | BKM | LR | 1,738.24 | 0.3792 | 0.3808 |
| RecEraser | BKM | ATT | 1,738.24 | 0.3795 | 0.3812 |
| SISA-LR | BRD | LR | 852.50 | 0.3824 | 0.3836 |
| SISA | BRD | AVG | 852.50 | 0.3716 | 0.3684 |

| ML-1M | Stage I | Stage III | Inertia (stage I) | DMF-NDCG (stage III) | LightGCN-NDCG (stage III) |
|--------------|---------|-----------|-------------------|----------------------|---------------------------|
| UltraRE | OBC | LR | 729.46 | 0.4042 | 0.4241 |
| RecEraser-LR | BKM | LR | 1,318.31 | 0.3968 | 0.4169 |
| RecEraser | BKM | ATT | 1,318.31 | 0.3973 | 0.4171 |
| SISA-LR | BRD | LR | 1,129.72 | 0.3972 | 0.4173 |
| SISA | BRD | AVG | 1,129.72 | 0.3956 | 0.3941 |

| ADM | Stage I | Stage III | Inertia (stage I) | DMF-NDCG (stage III) | LightGCN-NDCG (stage III) |
|--------------|---------|-----------|-------------------|----------------------|---------------------------|
| UltraRE | OBC | LR | 775.22 | 0.4294 | 0.4345 |
| RecEraser-LR | BKM | LR | 1,319.61 | 0.4230 | 0.4278 |
| RecEraser | BKM | ATT | 1,319.61 | 0.4234 | 0.4281 |
| SISA-LR | BRD | LR | 1,052.74 | 0.4255 | 0.4273 |
| SISA | BRD | AVG | 1,052.74 | 0.4063 | 0.4025 |

Based on the presented tables, we obtain several key observations:

* Comparing UltraRE, RecEraser-LR, and SISA-LR, we observe that a lower inertia is associated with a higher recommendation performance. This suggests a positive correlation between clustering performance and recommendation performance.
* LR demonstrates similar recommendation performance to ATT, surpassing AVG, when compared to the LR-versions (RecEraser-LR and SISA-LR) and their original versions (RecEraser and SISA).
This finding is consistent with the results from the "Effect of Combination" section. * In the ML-1M and ADM datasets, SISA-LR achieves comparable recommendation performance to RecEraser-LR, while outperforming it in ML-100k. This aligns with the differences in inertia across the various datasets, indicating that (i) clustering performance generally correlates positively with recommendation performance, and (ii) the advancements in RecEraser are primarily attributed to the effectiveness of the combiner (ATT). In conclusion, superior clustering performance corresponds to improved recommendation performance. Our proposed clustering algorithm (OBC) exhibits substantial superiority over the compared methods. **Q2**: Typos. **Response**: We will carefully review the paper and fix any typos during revision. --- Rebuttal Comment 1.1: Title: Response to Authors' Rebuttal Comment: Thanks for your response. Your response has addressed my concerns. Please include these additional experimental results in the main content of the manuscript.
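To make the stage-III LR combiner discussed above concrete, the following is a minimal sketch of fitting a logistic-regression combiner over per-shard sub-model scores with plain gradient descent (all names and the binary-label setup are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def fit_lr_combiner(shard_scores, labels, lr=0.5, n_iter=2000):
    """shard_scores: (N, S) predictions from S sub-models; labels: (N,) in {0, 1}.

    Returns a weight per sub-model plus a bias, learned by
    full-batch gradient descent on the logistic loss.
    """
    w = np.zeros(shard_scores.shape[1])
    b = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(shard_scores @ w + b)))  # sigmoid of combined logit
        g = p - labels                                     # gradient w.r.t. the logits
        w -= lr * shard_scores.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b
```

Unlike an attention-based combiner, this adds only S + 1 parameters, which is consistent with the efficiency argument made in the rebuttal.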
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their valuable comments and suggestions, which are crucial for improving our work. We hope our response addresses your concerns. **Q1**: Major technical contribution compared with RecEraser (SOTA). **Response**: We summarize the contributions in the table below, followed by a detailed explanation.

| No. | Contributions | UltraRE (ours) | RecEraser [6] | Significance |
|-|-|-|-|-|
| 1 | Analyse with error decomposition, aiming to reduce redundancy loss (stage I) and combination loss (stage III) | Theory | Intuition | The first exact unlearning method that is guided by theory |
| 2 | Propose a novel clustering algorithm (stage I) | OBC | BKM [6, 7, 24] | Improve recommendation performance by 1.28%, reduce time cost by 98.11% (20,227.87% on ML-10M) |
| 3 | Simplify model combiner (stage III) | LR | ATT | Achieve comparable recommendation performance, reduce time cost by 11.95% (13.30% on ML-10M) |

Firstly and most importantly, we rethink exact unlearning from the perspective of ensemble learning and provide a theoretical analysis of the prediction error, which serves as the theoretical basis for existing exact unlearning approaches, i.e., ensemble retraining frameworks. To the best of our knowledge, this is the first exact recommendation unlearning method that offers theoretical guarantees, setting it apart from previous work [6, 24] that relied on the intuition of collaboration preservation. Building upon this theory, we decompose the error into three distinct components and establish correlations with the corresponding stages. Our goal is to enhance performance by reducing the error at each stage (I and III). While this may appear similar to the improvement direction pursued by prior works [6, 7, 24], we want to emphasize that our approach is guided by theory, not intuition. In other words, our work provides theoretical support for prior intuitions.
Secondly, we propose a novel algorithm to address an important issue in stage I, i.e., the incongruity between sample similarity and shard balance. Prior work [6, 7, 24] relies on the manual insertion of a priority list to achieve balanced clustering (BKM). This approach, once again, relies solely on intuition without any theoretical support. As shown in our empirical study (Figure 4), BKM fails to effectively resolve this incongruity, compromising sample similarity to achieve shard balance. In contrast, we propose a novel balanced clustering algorithm (OBC) that aims to obtain the optimal solution by optimizing the Monge-Kantorovich Problem. To further enhance efficiency, we solve the optimization objective through Sinkhorn distance. The experimental results (Tables 2 and 3, and Figure 4) demonstrate that the proposed algorithm significantly outperforms compared methods, efficiently achieving an equilibrium between the incongruity. Our freshly included experimental results in *Response to Q2* on a large-scale dataset provide additional evidence of the substantial efficiency enhancements achieved by our method. Thirdly, guided by the experience in ensemble learning, we simplify the model complexity by employing LR. Additionally, we conduct an empirical study to validate that the model complexity in stage III can be reduced while maintaining comparable performance. **Q2**: Significance of efficiency improvement. **Response**: Current experiments do not sufficiently demonstrate the efficiency enhancement, because the majority of time is spent during stage II (independent training) while the enhancements occur in stages I (non-overlapping division) and III (model combination). Thus, following [6], we further conduct experiments on ML-10M (the largest dataset used in [6]) with 50 shards (large shard number), and report the results in the table below. 
| ML-10M | DMF-RecEraser | DMF-UltraRE | LightGCN-RecEraser | LightGCN-UltraRE |
|-----------|---------------|-------------|--------------------|------------------|
| Stage I | 872.53m | 259.18s | 879.06m | 257.83s |
| Stage II | 213.55s | 208.44s | 860.12s | 852.32s |
| Stage III | 83.74s | 67.26s | 384.50s | 376.57s |
| Total | 877.48m | 534.88s | 899.80m | 1,486.72s |

The results indicate that our proposed UltraRE significantly improves efficiency compared to RecEraser. Specifically, in stages I and III, UltraRE demonstrates average efficiency enhancements of **20,227.87%** and 13.30% respectively. Note that, in the large-scale dataset, our proposed clustering algorithm (OBC) outperforms the BKM used in RecEraser, *reducing clustering time from several hours to just a few minutes* (in stage I, 872.53m/879.06m vs 259.18s/257.83s). The experimental results demonstrate higher efficiency improvements when compared to the results reported in the paper, providing additional evidence of the substantial efficiency enhancements achieved by our method.
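For readers unfamiliar with the OBC idea described in the global rebuttal, the core step of casting balanced clustering as entropically regularized optimal transport can be sketched with a generic Sinkhorn iteration. This is an illustrative sketch (the function name, the uniform marginals, and the argmax rounding are our assumptions), not the authors' OBC code:

```python
import numpy as np

def sinkhorn_assign(cost, eps=0.05, n_iter=500):
    """cost: (N, k) sample-to-centroid costs. Returns shard labels whose
    soft assignment mass is constrained to be balanced across shards."""
    N, k = cost.shape
    K = np.exp(-cost / eps)            # Gibbs kernel of the regularized OT problem
    u, v = np.ones(N), np.ones(k)
    r = np.full(N, 1.0 / N)            # each sample carries equal mass
    c = np.full(k, 1.0 / k)            # balanced shard marginals
    for _ in range(n_iter):            # alternate row/column scaling
        u = r / (K @ v)
        v = c / (K.T @ u)
    P = u[:, None] * K * v[None, :]    # transport plan: rows sum to 1/N, cols to 1/k
    return P.argmax(axis=1)            # round the soft plan to hard assignments
```

Inside a k-means-style loop, one would alternate this assignment step with centroid updates; the balance constraint is enforced by the column marginals rather than by a manual priority list.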
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Self-supervised Object-Centric Learning for Videos
Accept (poster)
Summary: This paper introduces a fully unsupervised method, SOLV, for segmenting multiple objects in real-world sequences. The authors employ the token drop strategy to reduce computation and enhance regularization. Additionally, Spatial-temporal Binding is proposed to aggregate the features of objects within and across frames, resulting in a remarkable improvement in mIoU. To address over-clustering, the authors employ Agglomerative Clustering to merge slots. It is noteworthy that SOLV not only significantly advances the state-of-the-art on commonly used simulated data but also stands out as the first fully unsupervised method to demonstrate state-of-the-art performance on unconstrained videos from the Youtube-VIS dataset. Strengths: 1. A fully unsupervised object-centric learning method for scaling multi-object segmentation to in-the-wild videos is proposed for the first time and achieves significant improvements on traditional synthetic datasets. Moreover, it delivers impressive results on real-world video datasets. 2. Many details in the method, such as the token drop ratio, the number of slots, and variations of the visual encoder, have undergone ablation experiments, enhancing the robustness and reproducibility of the work. 3. The slot merging strategy significantly reduces the sensitivity of the model's performance to the number of slots. 4. Both temporal binding and spatial binding techniques bring significant gains in mIoU. Weaknesses: 1. A more elaborate explanation is required to clarify how the Agglomerative Clustering algorithm is employed to dynamically determine the optimal number of slots. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the ablation experiments, a notable observation arises regarding the performance of Model-C and Model-D compared to their predecessor, Model-B, as they exhibit a decrease in FG-ARI.
However, Model-E, which combines the spatial-binding and temporal-binding components from Model-C and Model-D, ultimately achieves superior results in terms of FG-ARI. What are the possible reasons behind this phenomenon? 2. Is the shape of $\mathcal V_t$ at line 120 indeed $\mathbb R^{(2n+1)\times N \times (3P^2)}$, and should the dimension of $\mathcal V_t'$ in Equation (2) be $\mathbb R^{(2n+1)\times N \times (3P^2)}$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. As discussed at the end of the paper, it is acknowledged that Agglomerative Clustering is non-differentiable during the training process. Therefore, it might be worthwhile to explore alternative clustering algorithms as potential replacements. 2. Based on the visualized results, it is evident that SOLV excels in the detection and clustering of foreground objects. However, there is room for improvement in terms of the accuracy of segmentation boundaries. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We extend our sincere appreciation to the reviewer for their valuable insights and constructive feedback. We have responded to their questions and concerns, and we hope our explanations thoughtfully encompass the raised points. ## W1. Explanations on slot merging Please refer to Q4 of the global response. ## Q1. Reasons behind the model behaviours in the ablation Please refer to Q2 of the global response. ## Q2. Typo in the text We appreciate the reviewer for their extremely careful reading and inspection. We will add the missing patch size in line 120. --- Rebuttal Comment 1.1: Comment: Thank authors for the rebuttal. It solves my concerns. I keep my original rating as accept.
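The token-drop strategy highlighted in this review's summary can be sketched as random subsampling of patch tokens while remembering their original positions. The function name, keep ratio, and token shapes below are illustrative assumptions, not the paper's implementation:

```python
import random

def drop_tokens(tokens, keep_ratio, seed=0):
    """Randomly keep a fraction of patch tokens (hypothetical sketch).

    tokens: list of per-patch feature vectors for one frame.
    Returns the kept tokens and their sorted original indices, so
    positional embeddings can still be assigned to the survivors.
    """
    rng = random.Random(seed)
    n_keep = max(1, round(len(tokens) * keep_ratio))
    keep_idx = sorted(rng.sample(range(len(tokens)), n_keep))
    return [tokens[i] for i in keep_idx], keep_idx

# e.g. 196 tokens from a 14x14 ViT patch grid, 4-dim toy features
tokens = [[float(i)] * 4 for i in range(196)]
kept, idx = drop_tokens(tokens, keep_ratio=0.5)
```

Dropping half of the tokens roughly halves the cost of the subsequent attention layers, which is consistent with the memory/compute savings the review mentions.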
Summary: The paper introduces an approach to segment multiple objects in video sequences, on both real and synthetic data, without utilizing any additional signals besides RGB frames. It has three components: a visual encoder, a spatial-temporal binding module used for grouping pixels into slots across time, and a visual decoder to obtain the segmentation masks. The method also uses a slot-merging strategy to address the over-segmentation issue caused by a fixed number of slots. The paper conducts experiments and outperforms previous methods on synthetic and real data. Ablation studies are also presented to demonstrate each component's effectiveness. Strengths: This paper can segment complex scenes in Youtube videos and achieves satisfying qualitative results. It does not rely on extra signals in videos, which are claimed to be unstable for video segmentation. The token-drop approach successfully saves memory and computation time. This paper achieves great performance on MOVi-E. Weaknesses: This paper did not test on single-object segmentation datasets like DAVIS 2017 or compare with methods like CIS (Yang et al., 2019). Technical Quality: 3 good Clarity: 3 good Questions for Authors: For video segmentation datasets like DAVIS17, how do you choose which segmentation mask to use as the object mask (rather than the background) when calculating the metric? How do you make sure that the merging mechanism successfully merges the masks into a whole object? How is it determined which parts should be merged into one object? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It would be better to visualize several examples of each slot's segmentation masks for the 8-slot setting, which is claimed to be the best in Table 4. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the feedback and valuable comments you have shared. In response, we have furnished explanations aiming to address your concerns effectively, and we hope the score can be raised accordingly. ## W1. Single object segmentation We did test on DAVIS2017 in our paper; the results are reported in Table 2, and it is a dataset for multi-object video segmentation. In terms of single-object video segmentation, existing datasets are usually defined for motion segmentation, which assigns all pixels with similar motion into one group, regardless of semantic categories. In our case, by contrast, we aim to segment objects based on semantics. For example, in the given example from DAVIS17 (Fig 14 (a) in FD), the person and the bicycle are segmented into two different groups, consistent with the ground truth (Fig 14 (b) in FD), and mIoU is calculated as 60.0. However, when the ground-truth annotations are turned into a single-object evaluation (Fig 14 (c) in FD), the mIoU score drops to 41.2 (for the whole video). Nevertheless, we provide the results of our models on DAVIS17, which is designed for multi-object segmentation: - 6 slots, no merging: 39.06 mIoU - 8 slots, merging: 47.50 mIoU - 4 slots, merging: 49.01 mIoU - CIS [1]: 53.1 mIoU - DyStaB [2]: 58.9 mIoU ## Q1. Mask matching for evaluation We follow the common practice in this community and use Hungarian Matching for evaluation, as also used in DINOSAUR, SAVi, and OCLR. We will clarify this evaluation metric in our final paper. ## Q2. Slot merging mechanism Please refer to Q4 of the global response. ## L1. Qualitative examples We have provided additional qualitative results of our best model for the masks of each slot in Figure 9 of the Supplementary. Note that, although the model is trained with 8 slots, some cases result in fewer masks due to the dynamically set number of slots in slot merging. 
**References** [1] Yang et al., Unsupervised Moving Object Detection via Contextual Information Separation, CVPR19 [2] Yang et al., DyStaB: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping, CVPR21 --- Rebuttal Comment 1.1: Comment: Thank you very much for your clarification, which addresses many of my concerns. However, I still have some questions: (1) Why does the mIoU score drop from 60.0 to 41.2 when the ground-truth annotations are turned into a single-object evaluation? Could you please provide any insights or analysis on this? (2) As reviewer xws8 noted, this method seems unable to segment accurate object boundaries. I understand this is due to the algorithm running on patch-level image tokens, but how could this weakness be improved? --- Reply to Comment 1.1.1: Comment: Thank you for your insights and the questions raised. (i) The drop in mIoU when transitioning from multi-object to single-object evaluation is due to the nature of the evaluation protocol: * The single-object evaluation is on **motion segmentation**, meaning all pixels with the **same motion** are grouped as one object, **regardless of their semantic category**. This naturally leads to a low evaluation score for our model. * To illustrate this effect, consider an image of a person riding a bike: in single-object segmentation benchmarks, both person and bike are **assigned the same label in the ground truth**, as they are undergoing the same motion, while our prediction assigns **different labels to person and bike.** * During evaluation on such a benchmark, our model naturally **“over-segments”** the scene from the perspective of motion segmentation, and we can only compute mIoU between the ground truth and the person, or between the ground truth and the bike; in either case, our model gets penalised incorrectly, even though it has correctly segmented the bike and person as two categories. 
* For example, for the example provided in Fig 14 in FD, * Multi-object evaluation ((a) and (b)): [ IoU(bicycle slot, bicycle mask) + IoU(person slot, person mask) ] / 2 * Single-object evaluation ((a) and (c)): IoU(person slot, bicycle + person mask) (ii) We agree with the reviewer that this is an interesting and important research direction to pursue. We believe that learning pixel correspondence might play an important role by improving predictions around the boundaries. In our early experiments, to provide the model with pixel-level information, we experimented with reconstructing optical flow and extra consistency losses for pixel-level correspondence. However, mostly due to unstable and incorrect flow predictions on complex YTVIS videos, the results did not improve. On the other hand, we believe that recent work on long-term pixel tracking [1] might help achieve sharp boundaries with better correspondences on unconstrained videos. We will explore this option in future work. [1] Harley et al., Particle Video Revisited: Tracking Through Occlusions Using Point Trajectories, ECCV22
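The evaluation contrast worked through above can be made concrete with a small sketch: an optimal one-to-one assignment of predicted masks to ground-truth masks (equivalent, for small slot counts, to the Hungarian matching mentioned in Q1) followed by mean IoU over the ground-truth objects. The toy masks and helper names are illustrative, not the paper's evaluation code:

```python
from itertools import permutations

def iou(a, b):
    """IoU of two binary masks given as flat 0/1 lists."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

def matched_miou(pred, gt):
    """Mean IoU under the best one-to-one matching of predicted masks
    to ground-truth masks (brute force; fine for a handful of slots)."""
    k = min(len(pred), len(gt))
    best = 0.0
    for perm in permutations(range(len(pred)), k):
        score = sum(iou(pred[p], gt[g]) for g, p in enumerate(perm)) / len(gt)
        best = max(best, score)
    return best

person, bike = [1, 1, 0, 0], [0, 0, 1, 1]
# Multi-object GT keeps person and bike separate; single-object
# (motion-style) GT merges them into one mask.
multi = matched_miou([person, bike], [person, bike])   # 1.0
single = matched_miou([person, bike], [[1, 1, 1, 1]])  # 0.5
```

With perfect semantic predictions, the multi-object protocol scores 1.0 while the merged single-object protocol scores only 0.5, mirroring the 60.0 → 41.2 drop described in the reply.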
Summary: This paper proposes an unsupervised segmentation method for videos. The backbone network is pre-trained with a self-supervised method. Then, the model spatially binds objects to slots on each frame and relates these slots across frames. The framework is trained to reconstruct the middle frame in a high-level semantic feature space. Strengths: 1. The proposed method is able to segment a real-world video without supervision by utilizing powerful pre-trained features. 2. The performance of the proposed method is better than that of the comparison methods. 3. The method is evaluated on both synthetic and real-world datasets. Weaknesses: 1. Although the method can roughly segment the objects in videos, it is not capable of detecting the boundaries of objects. 2. The method can only be applied to videos since it relies on temporal information. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What's the advantage of the proposed model compared to the recent Segment Anything model? 2. In slot merging, does the choice of clustering method affect the performance? 3. What if the method is applied to more complex scenes with more than 12 objects? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: see weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's feedback and perceptive remarks. We have provided responses and hope these will resolve the issues you've raised. ## W1. The method is not capable of detecting the boundary of objects We agree with the reviewer. Our method currently only segments objects at the patch level (due to the ViT backbone), and this causes a loss of pixel-level details at the boundaries. However, given more computational resources, scaling the image resolution, reducing the patch size, or using a feature pyramid may resolve this limitation; we leave this as future work. ## W2. The model relies on temporal information We agree with the reviewer. We believe that using temporal information for self-supervised video object segmentation is the right way to go, as we humans experience dynamic scenes every day, rather than static frames. ## Q1. Differences to SAM The fundamental difference lies in the source of supervision signals. Specifically, SAM is a supervised model, trained with 11M images and 1B+ masks, which are extremely costly to produce, while in contrast, our proposed model is trained in a self-supervised manner; by that we mean the entire training procedure **does not** require any manual annotations. ## Q2. Different clustering algorithms We cannot use other clustering algorithms that require a predefined number of clusters. In agglomerative clustering, the number of clusters is dynamically set based on a threshold. For more details, please refer to Q4 of the global response. ## Q3. Slot number Please refer to Q3 of the global response. --- Rebuttal Comment 1.1: Title: Reply to Author Rebuttal Comment: I have read the other reviews and the authors' rebuttal. Thanks for the response, which addressed part of my concerns. I agree with the authors that SAM is a supervised model. However, it generalizes well to unseen scenes. 
Since we already have this kind of model, what is the point of designing one from scratch instead of building on it? I think a zero-shot segmentation method based on SAM holds greater significance than the proposed method. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback and time. (i) While we acknowledge the potential of SAM, we would like to emphasise the importance of object-centric learning without any annotations or extra modalities. From a scientific perspective, we explore the potential of object-centric learning without any labels, pursuing the same ability that human infants can achieve. This, we believe, can spark novel solutions to the multi-object segmentation challenge. The unsupervised nature of our model allows for training with diverse video data without the expensive labour needed to obtain manual annotations. (ii) Following the reviewer’s suggestion, we started exploring the potential of SAM for our task. We tested it on some YTVIS frames and found that SAM consistently over-clusters objects into parts. This requires post-processing techniques to merge parts into objects, for example, similar to our slot-merging approach. This is an interesting direction to pursue in future work. Lastly, while we were working on this project, SAM had not been released; in fact, it has only just been accepted at ICCV23, and according to the review policy, we should still treat it as an unpublished paper.
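The threshold-based agglomerative merging referred to in Q2 above can be sketched in a few lines: repeatedly merge the two most similar slot clusters (average-linkage cosine similarity) until the best pairwise similarity falls below a threshold, so the final number of clusters is determined dynamically. The slot vectors and the 0.9 threshold are made-up toy values, not the model's learned slots:

```python
import math

def cosine(u, v):
    """Cosine similarity of two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def merge_slots(slots, threshold):
    """Greedy average-linkage agglomerative merging of slot vectors.

    Merging stops when no pair of clusters exceeds `threshold`, so no
    predefined cluster count is needed (unlike k-means)."""
    clusters = [[i] for i in range(len(slots))]

    def sim(c1, c2):  # average pairwise similarity between clusters
        return sum(cosine(slots[i], slots[j])
                   for i in c1 for j in c2) / (len(c1) * len(c2))

    while len(clusters) > 1:
        best, pair = -1.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = sim(clusters[a], clusters[b])
                if s > best:
                    best, pair = s, (a, b)
        if best < threshold:
            break
        a, b = pair
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

slots = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]  # toy slot vectors
groups = merge_slots(slots, threshold=0.9)      # first two merge
```

Here the two nearly parallel slots merge into one cluster while the orthogonal slot stays separate, illustrating how a similarity threshold replaces a fixed cluster count.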
Summary: This paper proposes a new self-supervised method for multi-object segmentation in videos called SOLV (Self-supervised Object-centric Learning for Videos). It adopts axial spatial-temporal slot attention to group pixels into slots within frames and then relates these slots across frames to track objects. It uses the masked autoencoder training objective to reconstruct visual features from latent slots. A slot-merging strategy based on agglomerative clustering is proposed to address over-segmentation by dynamically merging similar slots. Experiments show state-of-the-art results on the MOVi-E synthetic dataset and Youtube-VIS 2019 real videos without requiring additional modalities like optical flow. Strengths: The paper is well-written and easy to follow. The problem setup and overall approach are clearly explained. The performance looks great. The masked feature reconstruction seems interesting. The spatial-temporal binding and the slot merging make sense to me. Weaknesses: A comparison with self-supervised baselines on the real data (DAVIS17 / YTVIS19) needs to be added, as there is a huge performance gap between synthetic data and real data (80 vs 30). Such a huge gap makes it questionable whether the results on synthetic data are really representative. Also, it would be great to add recent supervised methods to help readers understand the gap between self-supervised and supervised methods. It would also be great to add an error analysis to showcase the failure modes and model behaviors. It seems that the model needs a superior pretrained backbone (DINOv2) to achieve good performance. DINOv2 is pretrained on a large, curated dataset, LVD-142M, which gives the authors an advantage over the baselines. A fairer comparison should be made in Table 2 with DINOSAUR, which is equipped with DINO. 
I encourage the authors to use more text to describe the necessity of a well-pre-trained encoder instead of simply saying a frozen encoder is used in the model. Given these concerns, I feel that the current results do not show enough evidence for acceptance; I'd be happy to increase my rating if the authors could address my concerns by showing more results (especially for real data with the same visual encoder). Misc: I suggest the authors revise Fig. 2 and its caption to make it more self-contained. Currently, the illustration is unclear and has many symbols that are not explained in the caption, making it difficult to understand without referring to the text. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback and valuable comments. We have provided detailed clarifications and hope these can resolve your concerns and thus raise the score accordingly. ## W1. Comparisons on real data We agree with the reviewer that performance on synthetic data is questionable; this is exactly the reason to benchmark on real videos. As far as we know, ours is the first model that tries to segment multiple objects on **real-world data** without relying on any other modality, for example, optical flow, depth, etc. In Table 2, we have compared our model to the recent SOTA unsupervised video segmentation model, OCLR, which is trained using optical flow, and, in Table 3, to the SOTA image-based unsupervised segmentation model, DINOSAUR (with a powerful DINOv2 backbone). Supervised models for unsupervised video object segmentation generally exploit pretrained object detectors. As requested by the reviewer, for reference, the existing supervised SOTAs for UVOS on the DAVIS17 validation set are Propose-Reduce [1] and UnOVOST [2], with mIoUs of 67.0 and 66.4, respectively. We will add these numbers to the table in our final paper. ## W2. Error analysis Thanks for the suggestion. In fact, we have already provided common failure cases in the Supplementary (line 136 and Figure 13); we will move those to the main text of the final paper with more explanations. Generally speaking, the failure modes can be cast into four categories: **(i) lack of sharp boundaries:** This is because we encode and process the image at the patch level (14 x 14 patches) and therefore cannot recover some of the pixel-level details, such as boundaries, while upsampling the patch-level segmentation results to the original resolution. **(ii) over-clustering:** This refers to the case where a single object is represented by multiple slots; this often occurs when the object has distinct visual features that are hard to group together. 
**(iii) under-clustering nearby instances:** This is the case where multiple instances of the same semantic class are represented by a single slot; it occurs due to their similar features and positional encodings. This similarity, stemming from their spatial proximity, results in their reconstruction using just one slot. **(iv) missing small objects:** When encountering small objects, the model often does not allocate a separate slot for them because they minimally impact the reconstruction loss. Qualitative examples of failure cases can be found in the Supplementary, Figure 13. ## W3. The model needs a superior pre-trained backbone Thank you for the comments. In order to compare with DINOSAUR fairly, we re-trained it with a DINO-v2 backbone, as explained in line 263. Comparing Model-A (DINOSAUR) and Model-E (ours) in Table 3, it can be observed that ours substantially outperforms the DINOSAUR model using the same backbone, i.e., 37.8 vs. 45.3 mIoU. In addition, we also experimented with the DINO backbone in Table 5, and it can be seen that our model even outperforms DINOSAUR with a DINO-v2 backbone, i.e., 37.8 vs. 41.9 mIoU. We have provided more discussion about the effect of different backbones (specifically DINO vs. DINOv2) and provide qualitative results in the Supplementary. More examples with DINO and DINO-v2 are shown in Figs 10, 11 in FD. In some cases, the DINO backbone performs better than DINO-v2, mostly due to more distinct features of different regions leading to over-clustering (Figs 12, 13 in FD). The DINO series has been widely used for unsupervised object segmentation in the literature. As suggested by the reviewer, we will add DINOSAUR with the DINOv2 backbone (Model-A) to Table 2 as well, and add more discussion on the necessity of using a well-trained self-supervised backbone in the line-293 paragraph. ## W4. Misc Thank you for the suggestion; we will revise the caption of the main figure and explain each symbol and operation briefly. 
**References** [1] Lin et al., Video Instance Segmentation with a Propose-Reduce Paradigm, ICCV21 [2] Luiten et al., Unsupervised Offline Video Object Segmentation and Tracking, WACV20 --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response; the comparison now makes sense to me. I will raise my rating to borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you for re-evaluating our work and adjusting the score. We sincerely appreciate your time and thoughtful feedback.
Rebuttal 1: Rebuttal: We appreciate all reviewers for their valuable comments and feedback. We hope the following response can fully resolve the raised concerns. For referred images, please see the **Figure Document (FD)** attached below. ## Q1. Contribution Summary We would like to start the rebuttal by elaborating our contributions in this paper: **(i) On the considered task:** We propose the first unsupervised video multi-object segmentation model that can work on both real-world and synthetic videos, without relying on any other modality, for example, optical flow, depth, etc. **(ii) On architecture design:** we propose a conceptually simple architecture that consists of * **DINO backbone:** *We are the first to address multi-object segmentation in real-world videos by performing reconstruction in the feature space of DINO.* Furthermore, we perform masked feature reconstruction, which provides efficiency in time and memory (as pointed out by reviewers Sj2S, gzBP) * **S-Bind & T-Bind:** *We extract spatially invariant slots (S-Bind) and relate them to each other through time (T-Bind).* Our spatial binding module is a modified version of ISA, as we mentioned in the paper. T-Bind attends to slots with the same index across time for consistent slot representations. The two binding modules provide complementary information to slots * **Slot Merging:** *Given the distinctive slot representations learned by our model, we merge slots to solve the over-clustering issue.* Over-clustering is a well-known issue in unsupervised object-centric segmentation due to the predefined number of slots. Our merging module adopts a simple agglomerative clustering to tackle this problem, as detailed in the response to Q4 * **Overall Pipeline:** *We propose a distinct architecture compared to previous work*. Specifically, existing object-centric methods for videos follow a similar approach by simply propagating the updated slot information to the next frame through the video. 
We adopt a totally different approach by extracting slots at the frame level, relating them temporally, and training with feature reconstruction **(iii) Comparison with other approaches:** Compared to other methods, ours outperforms on synthetic data (Table 1) and handles real-world videos. Ablation studies validate our binding modules' efficacy (Table 3) and emphasise the memory efficiency of our masked feature reconstruction (Fig 7). Our slot merging addresses the over-clustering issue prevalent in object-centric models (Table 4) ## Q2. Behaviours of Models Reviewers ask about the performance variance between models in our model ablation table (Table 3). In our ablation study, two key observations emerge: * FG-ARI, which we report to follow common practice, is not an accurate indicator of segmentation quality in videos, as mIoU is the widely accepted metric for such tasks * Both model-C and model-D tend to over-cluster objects when compared to model-B We explain the reasons below with examples in detail: * FG-ARI considers only in-frame segmentation. FG-ARI does not consider the consistency of indices across frames since it is calculated per frame. On the other hand, mIoU evaluates temporal consistency * FG-ARI becomes either 1 or 0 when there is only one object. The FG-ARI calculation becomes problematic when there is a single object to evaluate. Assume that there is only one labelled object with 4 pixels: `ARI([0, 0, 0, 0], [0, 1, 1, 1]) = 0`. Therefore, if there is only one GT object and a small mistake in the mask, FG-ARI considers the prediction completely wrong. This works to the benefit of model-B, which roughly segments the objects (Fig 1, 2 in FD) * Each binding module increases slot specialisation. As reflected in the quantitative results, combining the two binding modules increases the total consistency, both for merging and tracking. Slot specialisation is reinforced by both temporal information and invariant visual representations. 
To show this behaviour qualitatively, we visualise the similarity matrices comparing slot representations in-frame and between consecutive frames for model-C, model-D, and model-E in Figs 3, 4, 5 in FD, respectively * Due to slot specialisation, over-clustering occurs more in model-C and model-D. After inspecting specific cases where model-D and model-C have higher mIoU but lower FG-ARI than model-B, we found that increased slot specialisation leads to over-clustering. Specialisation improves tracking, but FG-ARI is not affected by errors in tracking. Please see Fig 6 in FD to compare the segmentations of model-B and model-D for a sample video. * Overall, due to (i) the issues with FG-ARI and (ii) the tendency to over-cluster, model-C and model-D have a lower FG-ARI than model-B. On the other hand, the full model (model-E) reaches the FG-ARI of model-B, also with a large improvement in mIoU (7.09). ## Q3. Number of Slots vs. Number of Objects The model tends to under-cluster the scene if the predefined number of slots is less than the optimal number. Specifically, our model has a tendency to merge nearby objects into a single slot (Fig 7 (a), (b) in FD) and to miss some objects in the background (Fig 7 (c) in FD) when the optimal number of slots is higher than the predefined one. ## Q4. On Slot Merging with Agglomerative Clustering Empirically, we found that parts of an object might be assigned to separate slots, though these slots still have similar representations (please refer to Fig 3 of the paper). Agglomerative Clustering (AC) works by merging data points based on similarity until a group-similarity threshold is reached; therefore, we group similar slots into a dynamically determined number of clusters with AC. It does not need a predefined number of clusters as in k-means; instead, it can dynamically determine the number of groups based on an empirically found similarity threshold. 
We provide some clustering examples with different threshold values, using the slots of the model trained without slot merging in Fig 8, 9 in FD. Pdf: /pdf/b60cebc500dd53f4d31f889c41c8576d3644f261.pdf
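The degenerate FG-ARI case worked through in Q2 can be checked with a short, self-contained adjusted-Rand-index implementation (a sketch for illustration, not the paper's evaluation code):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(true_labels, pred_labels):
    """Adjusted Rand Index via pair counting on the contingency table."""
    n = len(true_labels)
    pairs = Counter(zip(true_labels, pred_labels))
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    a = sum(comb(c, 2) for c in Counter(true_labels).values())
    b = sum(comb(c, 2) for c in Counter(pred_labels).values())
    expected = a * b / comb(n, 2)
    maximum = (a + b) / 2
    if maximum == expected:  # both partitions trivially agree
        return 1.0
    return (sum_ij - expected) / (maximum - expected)

# The single-object failure case from Q2: one GT object, one small
# mistake in the prediction, and ARI collapses all the way to 0.
print(adjusted_rand_index([0, 0, 0, 0], [0, 1, 1, 1]))  # 0.0
```

This reproduces the claim that a nearly correct mask on a single-object frame scores zero, while a perfectly matching partition (even with permuted labels) scores 1.0.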
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a Self-supervised Object-centric Learning framework for unsupervised multi-object video segmentation. To achieve this, the paper proposes to derive object-centric representations in a self-supervised manner to facilitate video segmentation tasks. The proposed approach adopts axial spatial-temporal slot attention to group visual regions with similar properties, while exploiting the training techniques of masked feature reconstruction and slot merging. Experimental results on synthetic and real video datasets demonstrate superior performance over previous works. Strengths: 1. Learning object-centric representations in an unsupervised fashion is vital yet challenging for several downstream applications, such as video object segmentation or robot manipulation. The task this paper aims to address is significant. 2. The experimental results and analysis in both the main paper and the supplementary material are comprehensive. 3. The overall paper is easy to follow. Weaknesses: My primary concern lies in the novelty and significance of the proposed framework. The novelty and contributions of the proposed approach are unclearly described in this paper. In the current draft, though it employs and modifies a series of existing techniques, it is not evident how the proposed approach is novel or provides significant advancements over existing methods. For example, COMUS [A] also learns self-supervised object-centric representations from DINO, aiming to achieve unsupervised object segmentation. The spatial binding is slightly modified from invariant slot attention [3], and the temporal binding is simply realized by the self-attention mechanism. In the visual decoder, the slot-merging process is based on the Agglomerative Clustering (AC) algorithm, while the decoder architecture follows DINOSAUR [62]. 
In summary, the proposed method seems to be a combination of existing approaches to perform the task of multi-object video segmentation; it is currently unclear how their combination in this framework constitutes a significant improvement or novelty over existing methods. [A] Zadaianchuk et al., Unsupervised Semantic Segmentation with Self-Supervised Object-Centric Representations. ICLR 2023 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. The proposed method utilizes a pre-defined number of slots for multi-object segmentation (e.g., 8 slots). Can the trained model be applied to an image with more than 8 objects? 2. In Table 3, why does adding spatial and temporal binding (Model-C and D) deteriorate the FG-ARI score compared with Model-B? 3. Also in Table 3, the version without temporal binding (Model-D) only slightly drops the performance relative to the full version (Model-E). Does this imply that temporal information need not be considered if frame-by-frame segmentation is performed well? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have listed the limitations of the proposed method and potential future research directions to alleviate the raised issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thorough review and insightful comments. We have offered comprehensive clarifications and hope these will address your concerns. ## W1. Concern on novelty and significance Please refer to Q1 of the global response for our contributions. We would like to highlight that our model is distinct from COMUS [1]. COMUS clusters features of object proposals at the dataset level for image-level semantic segmentation and does not employ any slot attention. ## Q1. On the number of slots vs. the number of objects Please refer to Q3 of the global response. ## Q2. FG-ARI gets worse while adding spatial-temporal binding Please refer to Q2 of the global response. **References** [1] Zadaianchuk et al., Unsupervised Semantic Segmentation with Self-Supervised Object-Centric Representations. ICLR23 --- Rebuttal Comment 1.1: Comment: Thank you for your response and clarification. I have reviewed the comments from the other reviewers as well as the authors' responses. While the rebuttal for novelty (Q1) does detail the functionality of each module and mentions experimental comparisons and analysis provided in the main paper, it still doesn't clearly highlight the primary novel designs (e.g., architecture, training objectives) for this task. Most of the modules described in the main paper/response seem to be adopted or only slightly modified from existing techniques. As such, my initial concern, that the proposed framework seems to be a combination of existing approaches to perform the task, has not been sufficiently addressed. Therefore, I've chosen to maintain my original score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to provide detailed feedback on our submission. To emphasise the novelty, our method is **the first unsupervised approach capable of segmenting multiple objects on complex real-world videos**, which is valuable on its own since **no previous work has had any success on the YTVIS dataset before**. 
While we build on previous work for some of the individual components, we made some specific design choices to make it work and the importance of each design choice is ablated in our experiments. We hope that the reviewer appreciates the effort that goes into adapting, integrating, and experimenting with each of these modules to address **unique and novel challenges on real data.**
Distribution-Free Statistical Dispersion Control for Societal Applications
Accept (spotlight)
Summary: This study extends the literature on distribution-free uncertainty quantification by providing bounds for various statistical dispersion measures, particularly those commonly employed in fairness evaluations of algorithms. The authors enhance the theoretical foundation by introducing additional results that allow bounding any functions with bounded total variation on the cumulative distribution function (CDF). Additionally, they propose a numerical optimization technique that achieves tighter bounds than previous methods. Empirical experiments validate the effectiveness of these tighter bounds and their relevance to ensuring fairness in model selection. Strengths: The paper addresses an existing gap in the literature on bounding other functions of the observed loss of an algorithm beyond means and QRBMs [1]. This is particularly relevant in the case of group-based measures that often appear in fairness analyses. The technical results that enable this are insightful and the proposed numerical optimization algorithm looks promising. The comprehensive empirical evaluation of the proposed methodology is commendable and provides valuable insights. [1] Jake C Snell, Thomas P Zollo, Zhun Deng, Toniann Pitassi, and Richard Zemel. Quantile risk control: A flexible framework for bounding the probability of high-loss predictions. arXiv preprint arXiv:2212.13629, 2022. Weaknesses: 1. Significance I am a bit torn on the significance of this contribution. The contribution is two-fold: provide the theoretical underpinnings for bounds of other types of functions on the CDF and provide a numerical optimization method that gives tighter bounds than the truncated Berk-Jones introduced in [1]. However, the technical results are not particularly insightful or novel and the bounds provided by the numerical optimization are comparable with the truncated Berk-Jones ones.
On the other hand, the immediate relevance to fairness applications is noteworthy and this practical relevance alone makes the work valuable and suitable for presentation at the conference. 2. Paper structure The writing in its current form lacks sufficient focus and intuition. The authors include many results in the main paper, while relegating most of the technical and experimental insights to the appendix. Consequently, the paper becomes excessively dense in certain sections. To address this, I would suggest narrowing the focus to a select few measures and demonstrating how the novel results apply, while providing intuitive explanations. The remaining measures could be included in the appendix. Similarly, a more extensive discussion on the numerical optimization method would greatly enhance the paper's clarity. By adopting these suggestions, the paper can strike a better balance between comprehensive coverage and presenting key insights effectively. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses +: * I understand that the numerical optimization gives tighter bounds, but how do you ensure that these bounds are valid? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations were adequately addressed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and overall positive feedback. Here, we provide answers to your concerns point by point. **C1:** I am a bit torn on the significance of this contribution... **A1:** We agree that our main focus is on extending existing methods to address societal applications rather than on technical novelty. We appreciate your recognition of our work's value with respect to algorithmic fairness. **C2:** The writing in its current form lacks sufficient focus and intuition... **A2:** We agree that our presentation of the material needs to be revised to better balance clarity and technical depth. Thanks for the suggestion about narrowing the focus and also discussing the numerical method more extensively. These will all be reflected in our revision. **C3:** I understand that numerical optimization gives tighter bounds, but how do you ensure that these bounds are valid? **A3:** Thanks for bringing up this point. We addressed this important question in the main paper but will highlight it in the revision. In lines 218-219 we have a post-processing procedure to ensure the bounds are valid. In Appendix A.2.2, between lines 558-565, we provide details on how to post-process. This can be implemented simply via a binary search. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for addressing my concerns. I maintain that this work will be an important addition to the current body of work in this area. I have increased my score with the understanding that the authors will follow through with point A2 above in the final version of the paper.
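A3 says validity is restored by a binary-search post-processing step. Below is a purely hypothetical sketch of that kind of procedure, not the paper's exact Appendix A.2.2 method: the scalar shrinkage parameterization and the Monte Carlo coverage check are illustrative assumptions. It binary-searches a factor that shrinks a candidate lower CDF band until the band's simultaneous coverage, evaluated distribution-free via uniform order statistics, reaches 1 − δ.

```python
import numpy as np

def coverage(b, n_trials=2000):
    """Monte Carlo estimate of P(U_(1) >= b_1, ..., U_(n) >= b_n) for uniform
    order statistics U_(i) -- the distribution-free event that a lower CDF
    band defined at the order statistics is simultaneously valid."""
    rng = np.random.default_rng(0)
    u = np.sort(rng.uniform(size=(n_trials, len(b))), axis=1)
    return np.mean(np.all(u >= b, axis=1))

def calibrate(b_raw, delta, tol=1e-3):
    """Binary-search the largest shrinkage t in [0, 1] such that the band
    t * b_raw still has simultaneous coverage >= 1 - delta."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if coverage(mid * b_raw) >= 1 - delta:
            lo = mid   # band is still valid; try keeping it tighter
        else:
            hi = mid   # band is too aggressive; shrink it more
    return lo * b_raw

n, delta = 50, 0.1
b_raw = np.clip(np.linspace(0, 1, n + 1)[1:] - 0.05, 0, 1)  # candidate band
b_valid = calibrate(b_raw, delta)
```

The binary search works because shrinking the band toward zero only increases coverage, so validity is monotone in the shrinkage factor.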
Summary: The paper demonstrates how to give distribution-free probabilistic control of certain statistical dispersion measures that are relevant in societal applications. An example is the GINI coefficient. The paper technically builds on and slightly extends the work of Snell et al., which shows how to control quantile-based risk measures. Strengths: Congratulations to the authors on the great paper. It is the best in my batch of 6. I am familiar with the line of work on quantile-based risk measures, and am happy to see it used in such an interesting application domain. The GINI coefficient and friends are a unique set of application domains, with large practical importance. Though I do not necessarily see large methodological novelty from the statistical point of view in this paper, I do believe that it has the potential for large impact in the social sciences (politics/economics, etc.) if the techniques are adopted. Weaknesses: I will be an advocate for the paper's acceptance, but I would really encourage the authors to do a major rewrite during the revision stage. The writing and presentation needs work, in my opinion. I have taken careful notes below. They are a bit stream-of-consciousness, so please forgive me for the tone. WRITING FEEDBACK: Section 4 reads really strangely to me right now. It contains essentially no technical content, but is written in a technical way. It’s more of an advertisement for the appendix. But it is vague, so I do not understand what has been proven, and what technical tools were used to do that until after I read the appendix (which 99% of your readers will NOT do). E.g. Under “Control for absolute and polynomial functions” you say “we can control these” but don’t give a formal statement saying what that means. Also, is it a corollary of some main result, or is it a bespoke proof? We don’t know until we read the appendix. For that reason, I’m not really learning anything from section 4 as a reader, which is not good. 
If you need to make space, cut 4.3 which is not important for main text. 
There are many places where the math just doesn’t make sense, and false statements are made, which makes the paper confusing. (To be clear, I believe all the statements are formally true, but not always for the reason given.) Examples: (1) “building blocks mentioned above as linear functionals of J: (i) nonlinear functions of J, i.e. \xi(J)”. What is this supposed to mean? J represents either F or F^- so \xi is a functional, not a function, right? And you’re saying it’s nonlinear, so how can it be linear? As a side note, this “building blocks” section really didn’t work for me in terms of clarifying the later examples. It is so mathematical! Who are you writing for, a theorist, an applied statistician, or a social scientist? You have to decide, because the latter does not know what a function of a functional is, and the middle one doesn’t care and will probably be turned off by that language in my honest opinion. As an alternative, just say that you are trying to control “functions of a whole CDF, like the GINI coefficient.” You can delineate the exact distinctions later when you’ve already made your main point. “nonlinear function of a functional” and similar language is definitely distracting, not helping. (2) Under “Control for a monotonic function”, this does not hold because of Proposition 1. Proposition 1 is a deterministic statement, and you need Proposition 3 to make the probabilistic statement. This is confusing because it is obviously false to say that “by proposition 1 you have X with probability 1-\delta”. (3) The statement of Proposition 3 is false. The second line holds only with high probability. (4) In Appendix C, it’s not “By classic DKW inequality” — it should say “By the classic DKW inequality and a Union bound/Bonferroni correction” because the classic DKW inequality does not have a k in it.
Also, I find this subsection uninteresting and would consider greatly shortening it by just stating “you can use Bonferroni correction” and then giving the example. The paper strikes a strange balance of being needlessly complex while presenting almost no theoretical detail in the main manuscript. It is fine to defer the most general form to the appendix, but in my honest opinion, the hybrid that is happening now is not very effective. Here are a few suggestions and places where I think the writing is ineffective. (1) Simplify superscripts and subscripts. The letter “F” gets way too many superscripts/subscripts/hats. It is hard for an unfamiliar reader to keep track of what’s happening. But it can be simplified with essentially no loss: for example, because you never write F_{n/2} or F^{delta/2} at all in the main paper, it suffices to remove those scripts entirely. (2) Remove technical minutiae. Remark 1 can be removed and the same point can be made as a half-sentence somewhere. Don’t use J as a generic notation for F or F^-, it’s very confusing. (3) I would add at least one actual technical result in the main body of the paper. Everything is in the appendix right now. I definitely respect the reasoning behind that choice, but right now it really isn’t working. The reality is that the paper is very mathematical, but without some clear theoretical statement (Theorem 2 from appendix would suffice), it’s not clear what the main result is and why it works. Furthermore, things like Proposition 1 do not help. Proposition 1 is obvious (does not require proof) and it isn’t directly connected to any of the claims you make in the paper. It should definitely be in the appendix. Whereas Theorem 2 is your main hammer, so you should highlight it and explicitly state somewhere that it is your main result. The sentence starting with “Roughly speaking, a function of bounded total variation…” is totally out of place given how technical the previous parts of the section have been.
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: N/A, just let me know if I have misunderstood anything above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate your careful examination, thoughtful and overall positive feedback. We will address your comments below. **C1:** ...I would really encourage the authors to do a major rewrite during the revision stage. The writing and presentation need work... **A1:** Thank you for your suggestions on presentation, in particular with respect to Section 4. We agree with your assessment of this section and plan a significant rewrite incorporating the suggestions. Meanwhile, for $\xi$, we realize that our formulation can be confusing. $\xi$ can be viewed as a function of $F$ or a function of $F(x)$ for some fixed $x$. If we view $\xi$ as a function of $F$ that induces $\xi(F(\cdot))$, then it is a function of a function, i.e. a functional. But $\xi$ is also a mapping from $R$ to $R$ once we fix an input $x$ and view $F(x)$ as a scalar. We will rewrite and clarify this point. We will also add $1-\delta$ in Proposition 3 and simplify the subscripts and superscripts for estimators of $F$ in different cases. Again, thanks for your careful examination! As for Appendix C, we indeed should say ``by the classic multi-dimensional DKW inequality in [2]". Appendix C was meant to say that we should take the better of the multivariate DKW and Frechet-Hoeffding bounds, and we will shorten it as suggested. Lastly, for a general answer, we agree with all the points brought up. Specifically, we agree that Section 4, for example Section 4.1, is written in a hand-waving way because we put all the details in the Appendix. We are also grateful for your suggestion about removing Remark 1 and writing out the formal statement of Theorem 1 in a technical and formal way as suggested. These will all be reflected in our revision. [2] Naaman et al. "On the tight constant in the multivariate Dvoretzky–Kiefer–Wolfowitz inequality" --- Rebuttal Comment 1.1: Title: Great job again Comment: Looks like the paper is in good shape.
I've read the response and get the sense that the authors will make a significant revision to the writing, which was my main concern. I see no reason the paper should not be accepted at this point. (But please, do follow through on the feedback.) Congratulations again. --- Reply to Comment 1.1.1: Title: Further response to reviewer Jihe Comment: Thank you so much for your kind words and helpful suggestions!
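The reviewer's point (4) above, that the classic DKW inequality has no k in it and needs a union bound for k groups, can be made concrete with a small numeric sketch. The two-sided DKW constant below is standard; the group setup is hypothetical.

```python
import numpy as np

def dkw_halfwidth(n, alpha):
    """Two-sided DKW band: with probability >= 1 - alpha,
    sup_x |F_hat(x) - F(x)| <= sqrt(ln(2 / alpha) / (2 * n))."""
    return np.sqrt(np.log(2 / alpha) / (2 * n))

# With k groups, a Bonferroni/union-bound correction runs each group's band
# at level alpha / k so that all k bands hold simultaneously.
alpha, k, n = 0.05, 4, 1000
per_group = dkw_halfwidth(n, alpha / k)   # corrected width for each group
single = dkw_halfwidth(n, alpha)          # uncorrected single-group width
```

The corrected per-group width is necessarily larger than the single-group width, which is the price of simultaneous validity across the k groups.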
Summary: Given a trained predictor, this paper proposes a framework for bounding various measures of dispersion of the loss distribution. The approach taken here is similar to that of conformal prediction, where one uses a calibration set to estimate the distribution of the loss. In this paper, the authors use the empirical CDF of the loss function on the calibration set to build confidence bands for the true (unknown) CDF. These confidence bands can then be used to obtain bounds on the dispersion measure of interest. This approach is demonstrated on several measures of dispersion: Gini coefficient, Atkinson index, difference of group medians, and others. Strengths: * This paper addresses an important class of problems. * The proposed methodology has a broad scope as it is applicable to any ML system and any data distribution (so long as it is i.i.d. and no distribution shift has occurred). * The method should be easy to incorporate into standard ML packages and workflows. Thus, it has a high potential impact on ML practice. * In terms of originality, this is a moderate contribution and seems mostly like a follow-up paper to [1]. However its applicative focus is sufficiently different. Also, it contains some technical novelty. Chiefly, the gradient-based optimization approach for the construction of tight confidence bands for a given dispersion measure. * On the experimental front, the proposed approach is demonstrated on a diverse set of dispersion measures and data sets, showing its real-world applicability. [1] Jake Snell, Thomas P Zollo, Zhun Deng, Toniann Pitassi, Richard Zemel. "Quantile Risk Control: A Flexible Framework for Bounding the Probability of High-Loss Predictions" ICLR (2023). Weaknesses: The results in this paper are based on the construction of confidence bands for the CDF of the loss and in particular on the use of goodness-of-fit statistics for the construction of these bands. 
This idea is well-known in the field of statistics and the authors should do a better job of connecting with that literature. In particular, the paper [2] from 1995 proposed constructing confidence bands for the CDF using the Berk-Jones statistic, just like the baseline used in the current paper, but is not cited. Minor points for improvement: 1. The fonts used in the figures are a bit small. 2. The notation in Section 2 is confusing. I believe it would be better to use the standard notation: X for inputs and Y for outputs. Z can be used for loss(h(X),Y). 3. The last paragraph of Section 3 is a bit much. Consider breaking it down and providing specific simple examples for clarity. 4. Page 4, line 139: a space is missing in "measures.They" [2] Art Owen. "Nonparametric likelihood confidence bands for a distribution function". Journal of the American Statistical Association (1995) Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * In many applications, one is given a finite dataset to work with. In that case, under the current paper's framework there is a need to decide how to split the data into a primary training set and an additional calibration set (which the authors refer to as a validation set; however, this term is different from that used in relation to cross-validation). Are there any guidelines that you can provide on this matter? e.g. the minimal size of the calibration set needed to achieve a certain error. * Using a fully-connected NN to parameterize the confidence boundaries in the optimization seems like overkill. I suggest that the authors try a much simpler parameterization, such as low-degree polynomials or using the first few elements of a Fourier series. This should be sufficient to express any smooth boundary accurately and is likely to converge faster due to the smaller number of parameters. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: * This paper only considers one-sided confidence bands for the CDF and the bounds on dispersion that result from them. In some cases it may be sensible to consider two-sided confidence bands but this point is not discussed in the paper. It seems that the repeated integration method for one-sided bands on which this paper is based can only be used for computing one-sided bands. I think this limitation should be at least mentioned. One could approximate a two-sided 1-alpha level boundary by combining an upper and lower band at the level of 1-alpha/2. However, this is not optimal (and is also not mentioned in the paper). Note that there are methods in the literature that compute two-sided confidence bands for the CDF that are based on similar ideas and they could be incorporated in some manner into the framework presented in the paper (but this is too much to ask for in a revision!). * Distribution shift is not mentioned anywhere, though I don't see this as a clear limitation of the proposed framework. Depending on the type of distributional change, one might be able to construct wider confidence bands that take it into account. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive suggestions and overall positive feedback. We will revise our paper according to your helpful suggestions and will include the important citations we missed. Here, we provide answers to your other concerns point by point. **C1:** In many applications... under the current paper's framework there is a need to decide how to split the data into a primary training set and an additional calibration set (which the authors refer to as validation set... are there any guidelines that you can provide on this matter? e.g. the minimal size of the calibration set needed to achieve a certain error. **A1:** While data splitting was an important issue in some older work in Conformal Prediction, we adopt the setting used in most recent work in distribution-free uncertainty quantification (DFUQ) [1,2]. In this setting, we start with a pre-trained blackbox model, and only need to use a validation dataset to calibrate and choose a hypothesis (for example, a threshold), without considering the training process for the blackbox model. [1] Snell et al. "Quantile Risk Control: A Flexible Framework for Bounding the Probability of High-Loss Predictions." [2] Angelopoulos et al. ``Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control." **C2:** Using a fully-connected NN to parameterize the confidence boundaries in the optimization seems like overkill. **A2:** Thank you for the suggestion. We implemented a polynomial model with different levels of complexity for comparison. For $L_1,L_2,..., L_n$, we sample $n$ one-dimensional Gaussian seeds denoted as $s_1,s_2,...,s_n$. Then: $\phi_{\theta}(s_i)=\theta_0+\theta_1 s_i + \theta_2 s_i^2 + ... + \theta_k s_i^k.$ Results with the polynomial model after some hyperparameter tuning are reported in Table 1 in the PDF attached in the author rebuttal. Overall we see mixed performance from the polynomial parameterization.
It produces the best bound on the smoothed-median metric, and performs better than Berk-Jones on CVaR. However, it fails to produce a better bound than Berk-Jones for the other two metrics. Overall, we find the neural network to be easy to implement and optimize -- importantly, the same set of parameters is used for learning bounds on all target metrics in Sections 5.1.1 and 5.1.2. Further, the networks can be trained in a few minutes on a single small GPU. While other parameterizations are possible, and sufficient hyperparameter tuning may lead to better performance across the board for the polynomial method, we believe the neural network is a good choice based on the robust results it produced in our experiments and the reasonable compute and memory costs. **C3:** This paper only considers one-sided confidence bands for the CDF and the bounds on dispersion that result from them... **A3:** Our framework indeed can provide two-sided bounds. This is illustrated in Figure 1, where we plotted two-sided bounds for the Lorenz curve. Lemma 1 converts the lower bound of the CDF obtained by [1] to an upper bound; both bounds hold simultaneously without incurring any inflation factor. We will clarify this in the revision. **C4:** Distribution shift is not mentioned anywhere, though I don't see this as a clear limitation of the proposed framework... **A4:** We did mention the assumption of consistency of the validation and test distributions as a limitation of the work. We agree that this is an important consideration. --- Rebuttal Comment 1.1: Comment: Overall I am happy with the rebuttal. Considering all the other reviews and your stated intentions of revising the text I think the resulting paper would be very nice. I have one comment and one question: (C2) It is great that you tried running a simple parametric representation. I think it would be helpful to have a plot in the revised Appendix that compares the bounds obtained using polynomials to the other approaches.
(C3) Could you please clarify this here? If you construct a 1-alpha two-sided band by taking a 1-alpha/2 lower band and a 1-alpha/2 upper band then the two-sided band you obtain would be conservative (i.e. it would hold with probability greater than 1-alpha). --- Reply to Comment 1.1.1: Title: Additional response to reviewer 6JyR Comment: Thanks for your feedback. For (C2), yes, we will include that in our revision. For (C3), thanks for the question. First, it is true that typical two-sided bound statements in the literature are with probability $\ge 1-\alpha$, which is standard in applying concentration inequalities though might be conservative. But we guess you are asking a slightly different question and here is our tentative answer to your possible concern. Please let us know if this doesn't address your concerns and we would be happy to clarify further. We guess what you are concerned about is the following problem: if one only considers a one-sided bound, **for example**, the $1-\alpha$ one-sided DKW inequality for CDF $F$, then $\hat F^\alpha_{n, L}=\hat F - C\sqrt{\frac{\ln(1/\alpha)}{n}}$. However, if we consider two-sided bounds with $1-\alpha/2$ for the upper band and lower band, then $F^{\alpha/2}_{n,L} =\hat F - C\sqrt{\frac{\ln(2/\alpha)}{n}}$ , $F^{\alpha/2}_{n,U}=\hat F +C\sqrt{\frac{\ln(2/\alpha)}{n}}$. From here, we can see the band obtained by $\hat F^{\alpha/2} _ {n,L}$ is wider than $\hat F^\alpha _ {n,L}$. The above problem can be addressed by our Lemma 1 because we consider constructing bounds via $L_1,...L_n$. Specifically, our Lemma 1 states that any $1-\alpha$ lower bound constructed by $L_1,...L_n$ for $F$ (upper bound for $F^-$) can be transformed to an upper bound for $F$, and the upper bound will automatically hold as long as the lower bound holds. That means with probability at least $1-\alpha$, both upper and lower bounds hold and the lower bound will not be worse compared to the construction in [1] for the one-sided lower bound.
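The inflation discussed in this exchange can be checked numerically. The sketch below sets the generic constant C to its classical one-sided DKW value (an assumption for illustration): the lower band from a naive 1 - alpha/2 per-side construction is strictly wider than the 1 - alpha one-sided band, which is exactly the slack the authors say Lemma 1 avoids.

```python
import numpy as np

def one_sided_dkw_eps(n, alpha):
    # One-sided DKW: with probability >= 1 - alpha,
    # F(x) >= F_hat(x) - eps for all x, with eps = sqrt(ln(1/alpha) / (2n)).
    return np.sqrt(np.log(1 / alpha) / (2 * n))

n, alpha = 500, 0.1
eps_one = one_sided_dkw_eps(n, alpha)        # lower band alone, level 1 - alpha
eps_split = one_sided_dkw_eps(n, alpha / 2)  # lower band inside a naive
                                             # two-sided construction
```

Since ln(2/alpha) > ln(1/alpha), the split-budget lower band is always looser than the one-sided band at the same nominal level.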
Summary: The paper proposes a methodology for distribution-free confidence intervals for a wide class of dispersion measures. This is motivated by validating the performance of machine learning algorithms with respect to more complex notions of performance, such as group-based measures of loss balance or the Gini coefficient of the allocation of the losses. This paper studies the problem in more generality than previous work, which focused on losses that are empirical means of quantities or quantile-based risk measures. The paper demonstrates the proposed methodology on several solid benchmark tasks. Strengths: The problem is well-motivated, and the need for such guarantees about (dispersion) risk measures is convincing. Such techniques would help audit and verify aspects of the performance of machine learning algorithms. I find the introduction and abstract to be clear and set the scene well. The work involves some non-trivial mathematics, and it is a formidable technical extension of previous work. A reader of previous work would not be able to apply it to the risk measures studied herein without this manuscript. The examples are serviceable and demonstrate the breadth of the proposed method. Weaknesses: My main concern is with the technical presentation in Section 4. I am fairly knowledgeable about this literature, but I still found it challenging to read at first. The authors have a lot of technical content to cover with limited space, so right now much of it is deferred to the appendix. I understand that this is challenging, and I appreciate all the work that has gone into this draft. Nonetheless, I hope this can be improved in the next iteration. In general, in my opinion there is probably too much content for the amount of space that is currently allocated, so perhaps more space can be made for this section.
My first suggestion is that the authors state the desired property of the output of their algorithm explicitly at the beginning of Chapter 4 or perhaps earlier -- if I understand correctly the goal is to give a 1-delta confidence interval for some risk measure. The output of the algorithm is this confidence interval. This is in contrast to some other works in risk control and conformal prediction, where the output of the algorithm is a choice of a parameter that controls the coverage rate or other risk level. When the authors of this work use the word "control" in the abstract and at various other points, I was initially confused as to what they meant. Secondly, it's not immediately clear what the proposed algorithm is when reading section 4. The section seems quite compressed, and as a result the reader is left with a lot of work to put together the pieces from the relatively lean presentation. I would find it helpful to see an algorithm environment or pseudocode explaining exactly the steps to compute the upper bound for at least one of the cases considered. It goes without saying that when it comes to presentation choices, it is somewhat a matter of taste. I understand the authors may wish to go a different direction than my suggestions above. Still, iterating on this initial draft would be very helpful. In good faith, I'm giving the paper a score of "7" in anticipation that the final version incorporates some improvements to the presentation. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Theorem 1 seems like the technical core. Is this impression correct? If so, more commentary about the functions f_1 and f_2 would be helpful. These exist in principle, but for which cases are they actually tractable? There is currently some notational pain with the overloading of \xi. Sometimes it is a function R to R, sometimes it is a functional. 4.1 is titled "Control of nonlinear functions ..."
but then starts off with "xi(F^-) which maps F^- to another function of R", which is interpreting it as a function. However, later on it is said that xi is monotonic, which is invoking the interpretation of xi as a function R to R rather than a functional. Similarly, 4.2 is about "control of nonlinear functionals" but then in the integral expression in line 183, F^- is evaluated at alpha, which means this expression uses \xi as a function, not a functional. I don't doubt that it's all correct, but it was hard for me to read at first. It would help if the object \xi were defined explicitly somewhere, so that exactly what type of object (function vs. functional) it is is stated. Minor: -- Berk-Jones and DKW should be cited. -- I know of two other references in the conformal prediction literature that use uniform bounds on the CDF like Berk-Jones; https://arxiv.org/abs/2104.08279 and https://arxiv.org/abs/2304.06158 . These works are somewhat technically related, although I understand the use of the bounds on the loss distribution is conceptually different than using the bounds on the conformal score function. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors do a good job explaining the context and limitations of their work. I would say the main limitation is that these techniques (like most techniques in this field) do not handle distribution shifts, and are assuming an iid data sample. This is explicitly addressed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and overall positive feedback. Here, we provide answers to your concerns and questions point by point. **C1:**...main concern is with the technical presentation in Section 4...have a lot of technical content to cover with limited space, so right now much of it is deferred to the appendix... **A1:** Thank you, we agree with your assessment of this section and plan a significant revision. **C2:** My first suggestion is that the authors state the desired property of the output of their algorithm explicitly at the beginning of Chapter 4...if I understand correctly the goal is to give a 1-delta confidence interval for some risk measure...This is in contrast to some other works in risk control and conformal prediction, where the output of the algorithm is a choice of a parameter that controls the coverage rate or other risk level.... **A2:** We use the term control in the same way as these other works -- our framework can be applied to choose a hypothesis that best controls some risk measure. We will clarify this in the revision. We would like to point out a distinction between how we exercise this control versus other recent approaches, notably ``learn then test" [1]. In both, as you state, the most important step is to provide a ($1-\delta$)-confidence upper bound for some risk measure for a given hypothesis $h$. In [1] they first fix a risk level $\alpha$ and output all hypotheses whose risk (their risk is defined as the mean of some loss function) is upper bounded by $\alpha$. Our algorithm iterates over all members in $\mathcal H$ and selects a hypothesis $h$ corresponding to the smallest risk upper bound. But our framework can naturally be used in their setting since we also obtain bounds for risks, and those bounds hold simultaneously for all hypotheses with high probability. [1] Angelopoulos et al., ``Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control."
**C3:** ...it's not immediately clear what the proposed algorithm is when reading section 4. The section seems quite compressed, and as a result the reader is left with a lot of work to put together the pieces from the relatively lean presentation. I would find it helpful to see an algorithm environment or pseudocode... **A3:** Thanks for the great suggestion to summarize our algorithm in pseudo-code -- we will include this in our revision. Our algorithm can utilize any method that produces high-probability two-sided bounds for the underlying CDF, where Berk-Jones and our numerical optimization bounds are two examples. The pseudo-code will make this clear. **C4:** Theorem 1 seems like the technical core. Is this impression correct? If so, more commentary about the functions $f_1$ and $f_2$ would be helpful. These exist in principle, but for which cases are they actually tractable? **A4:** You are right that Theorem 1 is one of our main results. This result can extend existing results in the literature to bound the loss beyond means and quantile-based risks. We have a formal version of Theorem 1 in Appendix, lines 513-517. $f_1$ and $f_2$ are quite tractable, under mild assumptions. For example, if $\xi$ is continuously differentiable, $f_1(x)=\int^x_0|\frac{d\xi}{ds}(s)|ds$ and $f_2(x)=f_1(x)-\xi(x)$. **C5:** There is currently some notational pain with the overloading of $\xi$......Berk-Jones and DKW should be cited... **A5:** Thanks for your other suggestions for revisions and notation clarifications, and for the pointers to missing citations. We will include these in our revision.
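The closed forms quoted in A4 can be sanity-checked numerically. A minimal sketch (our illustration with a hypothetical choice of $\xi$, not the authors' code):

```python
import numpy as np

# Numerical sanity check (illustrative, not the authors' code) of the
# closed forms from A4: for continuously differentiable xi,
#   f1(x) = int_0^x |xi'(s)| ds   and   f2(x) = f1(x) - xi(x).
# We use the hypothetical choice xi(t) = t^2 - t, whose derivative
# changes sign on [0, 1], so f1 genuinely differs from xi.
def f1(xi_prime, x, n=100_000):
    """Midpoint-rule approximation of int_0^x |xi'(s)| ds."""
    h = x / n
    s = (np.arange(n) + 0.5) * h
    return np.sum(np.abs(xi_prime(s))) * h

xi = lambda t: t**2 - t
xi_prime = lambda t: 2.0 * t - 1.0

val_f1 = f1(xi_prime, 1.0)   # analytically: int_0^1 |2s - 1| ds = 1/2
val_f2 = val_f1 - xi(1.0)    # xi(1) = 0, so f2(1) = 1/2
```

Note that both $f_1$ and $f_2$ are nondecreasing ($f_2' = |\xi'| - \xi' \ge 0$), which is presumably what makes them usable as monotone envelopes of $\xi$ in the bounding argument.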
Rebuttal 1: Rebuttal: We thank all four reviewers for their constructive and positive feedback, and for thinking our paper addresses important applications, has the potential for significant impact on social science, and addresses existing gaps in the literature on bounding the observed loss beyond means and quantile-based risks. We are very grateful for the extensive useful suggestions on how to improve the presentation, especially for Section 4. Although we cannot upload a revised draft during the rebuttal phase due to the NeurIPS policy, we plan to modify our paper accordingly. We also address concerns (**C**) and provide our answers (**A**) for each reviewer individually below. In addition, we attach a PDF for extra experimental results. Pdf: /pdf/f215b1adae74ae93cf8edb5fadbb60e7434ca7f6.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
GUST: Combinatorial Generalization by Unsupervised Grouping with Neuronal Coherence
Accept (poster)
Summary: This paper uses findings from vision sciences research to provide an image-segmentation model that could be used in ANNs. The “grouping with temporal coherence” assumes that grouping information emerges via spiking synchrony. This is implemented in their network, called GUST. The network is composed of two main parts: a spike-coding space (SCS) and a denoising auto-encoder (DAE). The DAE provides delayed feedback to the SCS. Unsupervised training enables GUST to detect objects with neuronal coherence. The neuronal coherence is evaluated through a metric that measures distance between spike trains of populations of artificial units conceptualized as neurons. The system is iterative and is shown to generalize as well. Strengths: The paper attempts to address a very important question in computer vision/neurosci. The network is brain-inspired and does not require extensive training. GUST shows through training that it gradually learns to segregate the scene into a self-organized temporal structure and represent the building blocks of a scene by “neuronal” coherence. This can have major implications for computer-vision models and possibly be used to resolve the difficult problem of identifying objects in cluttered scenes. Weaknesses: Although this work has a broad audience, the framing of the question is rather poor. The authors start by talking about the binding problem, but then they constantly give examples that are about cluttered visual scenes. The binding problem is more general and can be applied to, e.g., grouping of points that belong to a circle. Some assessment of the sensitivity of their model vs., for example, RC-bind is missing (e.g., showing the performance as a function of %overlap). The comparison is only made for generalization to a different number of segments (Figure 6). Also, they don’t explain whether/how this can be combined with deep-net models of visual object recognition.
From the neuroscience point of view, the authors seem to take for granted that neuroscience evidence supports perceptual grouping or binding by synchronized activity of neurons, but in neuroscience this is far from being accepted. It is indeed controversial. While it is OK to present it as a neuroscience theory with credit and evidence in favor, it should also be mentioned that this theory is controversial and that there are alternative theories. Better balance would be highly beneficial. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What exact computer-vision problem is this resolving? It should be clear that this paper is about image segmentation and not perceptual grouping. Perhaps the most relevant literature in neuroscience is object recognition in cluttered scenes. Why could this not be accomplished by deep neural network models with a lot of training data? A simple intuitive answer is lacking. Under what circumstances does the grouping by temporal coherence fail in realistic cases? For example, choices like tau2 might not be wise or it might fail in speedy segmentation. In Figure 6 (b-d), it seems like all models, including GUST, perform slightly better for larger numbers of test images. This seems counter-intuitive to me. Can the authors explain why the performance does not drop in their model? I find it very interesting that the “neuronal” coherence could encode the grouping uncertainty. This feature is not exploited in any of the simulations though. Do you think that you can emphasize this more? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are not deeply or explicitly elaborated. This is problematic.
Moreover, existing alternative algorithms are mentioned, but they are not systematically compared against the new work presented by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __A1. Framing of the question__. The major problem this paper focuses on is indeed the binding / grouping problem (in the ANN / neuroscience literature), which is clearly framed in Section 2 in the main text. The review paper [7] in the main text frames the binding problem as representation, segregation and composition (__Fig 3 in global pdf__) and this paper focuses on the first two, which are central to the binding problem. It seems that what really concerns the reviewer is the relevance between the grouping problem (‘more general’ as the reviewer argues) and the task we demonstrated (‘cluttered visual scenes’/’image segmentation’/'classification' as the reviewer suggested). On the one hand, the binding problem _in neuroscience_ is famously exemplified as grouping features of cluttered visual objects, the so-called superposition catastrophe (see refs [6][7][11] in the main text, copied in __Fig 3 in global pdf__); on the other hand, studying the binding / grouping problem via cluttered visual scenes is common in a list of works in the _ANN field_ on binding (see [14][48][7] in the main text). While the task seemingly bears similarity to image segmentation / object discovery in the ANN field, they focus on different challenges (we design GUST for general grouping problem challenges [unsupervised, multistable, common format...] with desirable features of general applicability [denoising mechanism], rather than to conquer a specific static image segmentation task). Since the architecture in this work does not have explicit constraints / expert knowledge on the modality/semantics of the data (e.g., classical image segmentation algorithms like MRF, or supervised segmentation), the mechanism could be generalized to other modalities or more abstract high-dimensional representational spaces (argued in Section 2 in the main text). Thus, the visual task in this paper is chosen for vividness and should not lose generality.
For a systematic review, please see ref [7] in the main text, which could resolve most of the concern about the background knowledge (how to ground the binding problem in the ANN field, w.r.t related topics like segmentation and object discovery, in neuroscience, and in Gestalt psychology). For example, as to the reviewer's __concern__ that ‘_grouping of points that belong to a circle_ is also binding', this actually reflects specific principles (proximity, similarity, closure, symmetry, etc.) in Gestalt psychology. However, in Hatfield and Epstein's work [1], it is argued that various Gestalt laws are all special cases of a single information-theoretic grouping principle, which is what we focus on. [1] Gary Hatfield and William Epstein. The status of the minimum principle in the theoretical analysis of visual perception. Psychological Bulletin, 1985. __A2. Relation to deep learning__. __The intuitive answer to why grouping challenges deep-nets__ is explained in the second paragraph of Section 2 in the main text and reviewed in detail in ref [7] in the main text. The specific question of why large data cannot solve grouping is addressed in the paper (line 28): ‘the cost of representing all possible combinations statically and locally is exponentially high’. While the specific task may seem easy to conquer with specific ANN models and supervised training, we focus on solving it with a model whose desirable features are of general applicability, so that it has the potential to solve a family of tasks (see A1), which is a hard problem. __The ways GUST could be combined with deep-nets__ are diverse and are explicitly discussed in A.1 of the appendix (lines 76, 99). Briefly, generative modeling and denoising are very basic ideas in deep learning, indicating the possibility of linking to more advanced models (e.g., diffusion models). Also, the spiking activity in the SCS could be read out to ANNs (to solve downstream tasks like _recognition_) by the HU proposed in ref [56] in the main text. __A3.
Concern for more results__ For analysis of the __overlap effect, see A.3 in the global response__. Besides, as far as we understand, for single-level grouping, varying the 'object number' is complete for composition. For more grouping results, see A.12 in the appendix. __A4. Controversy over binding by synchrony__. Actually, the two mainstream binding theories are synchrony-based and attention-based (FIT theory), which is briefly mentioned in the appendix (line 123). We actually combine the two sides in our model. We will balance the two sides better when we revise the paper. __A5. Failure case__. A failure case is shown in Figure 12 in the appendix, when the overlap is severely heavy. The moving case is also challenging (dual use of time; see A.12.5, Figure 18 in the appendix). In A.12.5, we also discussed the _speedy_ consideration (lines 538, 545, 574). As for $\tau_2$, as Figure 6 in the main text shows, the same $\tau_2$ accounts for grouping varied numbers of objects. It is the grouping dynamics that has the 'wisdom' (flexibility). __A6 Counter-intuitive common fate of all models__: Object number can have a slight effect on the overall overlap ratio among objects (__Fig 1 in global pdf__). Heavier overlap tends to cause more unbalanced cluster sizes. Since AMI favors unbalanced situations, it tends to give slightly better scores. Despite this slight bias, it is still clear that GUST generalizes in different cases (~0.6) and outperforms benchmarks in each case. __A7 ‘Grouping uncertainty’__. The coherence level can be read out by a coincidence detector (HU in ref [56]) to help downstream tasks. In general, such coherence acts as an internal indicator of 'good' solutions so as to enable communication / transmission to downstream processes. For a concrete example, see __A.8 in the global response__. __A8. Response to limitation__. The limitations are explicitly elaborated in A.1 in the appendix. We compared our work with RC-bind and slot attention as benchmarks in this paper.
In A.9.6 in the appendix, we provided a more detailed discussion of their relevance. While slot attention is one recent SOTA on color images, we can add more related models when we revise the paper. --- Rebuttal Comment 1.1: Comment: Thanks for these answers. In the light of the replies and of the work done in response to requests of others, I am happy to raise the score to 5. I remain concerned about the apparent lack of awareness of the authors that the binding or grouping by synchrony theory is at best controversial in neuroscience (see e.g. the recent review of Roelfsema in Neuron 2023). The authors should give a less biased view. --- Reply to Comment 1.1.1: Title: Response to Reviewer 9PtM Comment: We are grateful to the reviewer for introducing the review paper, which provides recent evidence against the temporal binding theory. We are aware that binding by synchrony has been controversial in neuroscience throughout its history, and we are happy to take the advice. Specifically, we have provided a more balanced review in the _Related Work Section_ in the revised main text. Besides, we also aim to integrate synchrony and attention in a single binding framework in the future, so as to account for more general binding. Thus, the introduced paper is very helpful for us.
Summary: The paper introduces GUST (Grouping Unsupervisely by Spike Timing network), a network architecture inspired by the human brain that aims to address the challenges of grouping sensory information in artificial neural networks (ANNs). The network incorporates biological constraints to bias the network towards a state of neuronal coherence, reflecting grouping information in the temporal structure of spiking activity. GUST is evaluated on synthetic datasets and demonstrates the ability to learn and represent grouping information from unsupervised objectives. The model progresses from perceiving global features to capturing local features and systematically composes learned building blocks to represent novel scenes. The paper highlights the advantages of grouping with temporal coherence, such as flexibility, dynamism, and reduced representational cost. GUST overcomes challenges related to non-differentiable dynamics, high temporal resolution, unsupervised learning, and explainability. It contributes to bridging the temporal binding theory with ANNs and enables the grouping of objects directly from multi-object inputs. The paper also presents a clustering method to evaluate neuronal coherence based on precise temporal structures. Strengths: - The paper very clearly presents the formalism in a way that feels very intuitive and tractable. It is well written and clear. Despite how long the appendix is, the authors did an excellent job of keeping things self-contained, with limited reference to the appendix needed to understand the paper. - The paper incorporates really interesting biologically inspired structures into the architecture. Most interestingly, they incorporate the use of coincidence detectors in a spiking model convolved with feedback information from the DAE to build a model which can group representations of object shapes in an unsupervised way. It is a really interesting architecture and I feel the model is an insightful architecture.
- The background presented is very well motivated. It discusses experimental findings and key results in the field on the experimental and the modeling end which motivate the work in the rest of the paper. Weaknesses: - While this paper is very technically involved (and I did not read the whole appendix as a 26-page appendix is a bit unreasonable), I did thoroughly enjoy this paper and the ideas it integrates into the model. - The authors do not appear to compare to some comparable architectures on this task with the same metric. But this is a minor weakness. It is a little difficult to know how to orient oneself when reading this model relative to the literature. However, the novelty of the architecture is more interesting. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - The modeling of the CD seems a bit confusing to me. The authors call it a coincidence detector but it appears that s' is 1 when a given neuron has rapidly fired between two time steps (this is also confusing to me since the neurons have refractory periods, so my understanding is that if s_i(t) = 1 then s_i(t-1) can't be 1). If this is the case, the term coincidence detector seems a bit misleading if it is considering the coincidence of one neuron across time? Or is it the coincidence of multiple neurons? Which would make sense in the neural coherence picture, but if this is the case then should it be s' = CD(s_i(t),s_j(t-1))? Could the authors clarify this? I'm not sure if I am misunderstanding something or if it is a confusion of notation. - In section 3.2, the authors state: "The refractory dynamics provides the essential temporal competition to separate groups of different objects and acts as a structural bias for a grouping solution". Could the authors better clarify what they mean by this? How does the refractory period provide competition? - Small typo in header of 3.5. Gredients should be Gradients.
- Could the authors elaborate and clarify why "The GUST is simulated 3-times longer than training to confirm the stability of the grouping."? This is a technical thing I am missing, but if the authors stop training at a particular time, then what is it that continues to change -- is it that the weights in the DAE are frozen but the model continues to receive inputs and output a score? - In figure 5d right, where it shows the spike firing sequence, to be very clear, are the individual columns the same neuron firing across time after that particular epoch? Or are they individual neurons (so 10 neurons) firing? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors thoroughly address limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer taking the valuable time to carefully review the paper and provide inspiring comments and helpful suggestions. We apologize for the confusion the paper caused, which we clarify as follows. __A1. The response to concerns on related works__. In this paper we provide the RC-bind and slot attention models as benchmarks, which are of comparable recurrent architecture (attentional feedback) and use a similar metric (AMI). As discussed in the related work (Section 5), the general computational problem is related to object-centric representation or object discovery in CV. Therefore, the general architecture is more or less related to this line of models (‘slot grouping’ in Section 5). RC-bind and slot attention are two representative models along this line. However, as the reviewer points out, the architecture of this work bears novelty w.r.t. this line of work in deep learning, partly because they have different representation assumptions (pre-designed slots vs emergent neuronal synchrony), which is central to the binding problem. __A2. The response to the concern about CD__. The coincidence detector in the neuroscience literature stresses two things: first, the decay is very fast so that only coincidentally arriving spikes within a very narrow time window can be integrated; second, the threshold is low so that the detector neuron can be activated by only a few spikes within the integration window. Here we take these ideas: the integration time window is narrow ($\tau_w=2$) and the threshold is low (one spike is able to trigger the neuron). That is partially why we term this non-linearity a CD. However, as the reviewer notes, in our model each single CD (for $s_i$) is applied in such a simplified case that the original interpretation degrades into a short-term integrator of $s_i$ itself, with a narrow time window and low threshold.
However, all CDs (taken together) still act as a coincidence detector / coincidence filter of the _population activity_ of the SCS. The input to the DAE (for non-linear processing) is not a single slice of spiking patterns but narrowly integrated spikes (by CD), so only coincident spikes $s_i(t), s_j(t-1)$ are taken into account by the downstream DAE. This is exactly the function of a coincidence detector in the cortex: to distinguish coincident spikes from averaged dispersed spikes for downstream processing. In sum, we term eq 6 a (generalized) CD because it captures the core features of a CD (narrow window and low threshold) and also functions as a coincidence filter to encourage the emergence of synchrony during learning (Section 4.2). Therefore, it captures more of the picture, both for the model itself and for its biological correlates. Besides, the simple formulation of eq 6 could naturally be extended to consider a local region of neurons (though this is not necessary in this model), so that the coincidence of multiple neurons is explicitly taken into account. For example, in a redundant/coarse coding scheme, there could be a column of neurons with similar response properties at each location of the SCS, instead of a single neuron at each location. In this case, each CD can be extended to be a non-linear low-threshold function of this column of neurons instead of a single $s_i$, so that multiple coincident spikes are taken into account (coincident spikes indicate the confidence of the presence of objects/features). __A3. The response to the refractory period__. The top-down modulation by the DAE provides positive feedback (reinforcing the original pattern), which is needed to construct a synchrony state. But to group multiple objects, we need different synchronized neuronal groups to alternate. Without competition (negative feedback), the winner pattern will dominate forever.
Here, the refractory period enforces a hard temporal competition within the neuron itself: if the neuron fires at some time, it cannot fire again immediately afterwards. Therefore, different possible ‘timings’ compete with each other to fire the spike for each neuron. Since the objects are grouped at different timings, each timing point is a _possible_ candidate grouping ‘slot’. Refractoriness provides competition among these ‘candidate timing slots’, which we call temporal competition. Since the refractory period is a structural constraint, it acts as a structural bias to avoid winner-take-all (WTA) patterns and encourages synchronized neuronal groups to alternate. It is notable that, here, refractoriness is a special type of self-inhibition. In the brain, similar temporal competition/self-inhibition could possibly be provided by inhibitory neurons, not restricted to the refractory mechanism, which could also plausibly induce various types of oscillations. __A4. The response to the concern on simulation period__. We provide the detailed explanation of simulation length in A.9.1 of the Appendix: ‘simulation length’ and ‘additional results’ (also see Figures 5/6 and Table 2 in the appendix). During training, the simulation length is 100 (Table 2 in the appendix). We keep the simulation length relatively small because it is more efficient to backpropagate the gradient and it saves training time. During testing/visualization, the model indeed tends to converge during the first 100 time-steps (Figure 5 in the appendix), but an even longer simulation time allows the model to converge more completely. That is why we use a 3-times longer simulation for testing and visualization. On the other hand, it also confirms the stability of the model. __A5. The response to the concern on figure 5__. Each small square is a snapshot of the SCS population activity (marked as ‘s’) or the attention feedback of the same dimension (marked as ‘$\gamma$’). Each row of small squares shows the last 10 time steps of the simulated population activity (after that epoch).
More detailed discussion can be found in A.9.3 of the appendix.
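As a footnote to A2 above, the generalized CD can be sketched as a short-term integrator over the population activity. A toy illustration (variable names are ours, not the released GUST code, assuming the $\tau_w=2$ window and one-spike threshold described in the rebuttal):

```python
import numpy as np

# Toy sketch (ours, not the released GUST code) of the generalized
# coincidence detector from A2: a short-term integrator with a narrow
# window (tau_w = 2) and a low threshold (a single spike suffices),
# applied elementwise over the SCS population activity.
def coincidence_detector(spikes, tau_w=2):
    """spikes: (T, N) binary array; returns s'(t) = 1 for each neuron
    that fired within the last tau_w time steps."""
    T, _ = spikes.shape
    out = np.zeros_like(spikes)
    for t in range(T):
        window = spikes[max(0, t - tau_w + 1): t + 1]
        out[t] = (window.sum(axis=0) >= 1).astype(spikes.dtype)
    return out

# Neuron 0 fires at t=0, neuron 1 at t=1: with tau_w=2 the two spikes
# land in the same filtered frame at t=1, so the downstream DAE sees
# them as coincident.
s = np.array([[1, 0],
              [0, 1],
              [0, 0],
              [0, 0]])
s_filtered = coincidence_detector(s)   # s_filtered[1] == [1, 1]
```

This matches the rebuttal's point that each per-neuron CD is only a short-term integrator, while the CDs taken together act as a coincidence filter of the population activity.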
Summary: The research topic of this study is the development of a model that can learn to group (segment) the pixels in an image into objects in an unsupervised manner and in a way that enables systematic (combinatorial) generalization with respect to the number of objects. Toward this goal, the authors proposed a model, named GUST, by taking inspiration from the mechanisms of the human brain. The proposed model consists of two types of neural networks, a spiking neural network and a denoising autoencoder, that implement iterative bottom-up/top-down processing together with several other brain-inspired components. The authors selected a simple loss function corresponding to image denoising for unsupervised training, and proposed a strategy for end-to-end gradient-based training and also a clustering method for grouping (segmentation). The effectiveness of the proposed method was tested with synthetic images containing simple 2D objects. The proposed method has shown superior generalization performance over two competitors with respect to the difference in the number of objects during training and testing. The characteristics of the learning process and the role of each component were also empirically studied. Strengths: Unsupervised segmentation (grouping) itself is a challenging computer vision task, and also can be the important first step in more complex visual tasks. Systematic (combinatorial) generalization to new situations is a hallmark of human intelligence yet to be achieved by machine learning models. The result that a model consisting of a spiking neural network and a denoising autoencoder, with several other brain-inspired components, can be effectively trained to solve a segmentation task in a systematic generalization setting in an end-to-end unsupervised manner will be of interest to the NeurIPS audience. The ideas and implementations of the brain-inspired components are explained fairly well, and the actual code is accessible via the URL provided in the Appendix.
The authors provide empirical analysis about the learning process and the effect of the brain-inspired components (ablation study) in addition to the simple performance report, which gives additional value to this study. Although the generalization performance is still not perfect and the experiments are conducted with simple synthetic images, this study shows an interesting research direction worth pursuing. \# I took additional experiments provided in the Appendix into consideration when I evaluated the variety of experiments. Weaknesses: A relatively weak point of this study is the difference from Zheng et al [38, in References for the main body of the paper]. Although the proposed model (GUST) is advanced compared to DASBE [38] in multiple aspects as stated in Section 5 (Related work), the core idea of combining a spiking neural network and a denoising autoencoder in a brain-inspired manner and using temporal coherence for grouping is proposed in DASBE. [38] Hao Zheng, Hui Lin, Rong Zhao, and Luping Shi. Dance of SNN and ANN: Solving binding problem by combining spike timing and reconstructive attention. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 31430–31443. Curran Associates, Inc., 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Question Are there any other differences between your study and Zheng et al [38] other than the six aspects stated in Section 5 (Related work)? Major suggestions 1. Describing further differences from Zheng et al [38] (if any) If the answer to the above question is yes, it is beneficial to describe the additional differences in the paper (either in the main body or in the Appendix). The differences are not necessarily to be about GUST and DASBE but can be about other aspects of the studies (papers).
If any of the already stated six aspects can be further detailed to emphasize the novelty of this work, it would also be beneficial. 1. Clarifying explainability issue In the paragraph starting at line 73, four challenges are stated; the last one is explainability of the representation. However, it seems (to me) that the answer/discussions about this one are not clearly stated in the paper, at least compared to the other three. It would be better if this foreshadowed issue could be highlighted with explicit keywords like "explainability" or "explainable" in the latter part of the main body of the paper. 1. Adding pointers to the Appendix in the main body of the paper Although the Appendix of this paper contains rich contents, currently it is mentioned only three times in the main body (without section numbers). Additional appropriate pointers to the Appendix in the main body would be beneficial for the readers. (For example, I had a concern about the limited variety of inputs in the experiments when I first read the main body, but later I realized that results with other inputs are provided in the Appendix.) Please also refer to the Weaknesses section above and Limitations section below. Minor questions and suggestions 1. Figure 2 (a) lacks $x$ and Figure 2 (b) lacks $\tilde{x}$. It is better to connect these two in figures. 1. Is $\tau_w$ in line 177 a typo of $\tau_1$? If not, what is $\tau_w$? 1. The code availability would be better stated in the main body of the paper. 1. Please review the descriptions in References. For example, von der Malsburg 1994 [6 in References for the main body] and von der Malsburg 1994 [11 in References for the main body] lack information about what kind of publications they are, and Engelcke et al [53 in References for the main body] was accepted at ICLR 2020. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are detailed in Appendix A.1, but the existence of this section is not indicated in the main body of the paper. It would be better if the authors mentioned the section at an appropriate place in the main body when they revise the paper. In addition to the limitations already stated, I think the current study is also limited in the following two points. 1. The objects used in the experiments are all 2D. (Experiments with images of 3D objects would preferably be used in future work.) 1. The background is void. (If the background also consists of objects, the task becomes more difficult. This situation would also preferably be tackled in future work.) If these are correct, it would be nice to mention them as well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __A1. Major response__ to the concern about the difference from Zheng's work. We will first detail the six aspects in Section 5 and then point out more technical differences. First of all, the two papers are __answering different questions__ at different levels; ours is not just a more advanced version. Intuitively, Zheng's work focuses on how to __design__ a clustering program (like EM), while ours focuses on how to __learn__ the program from a blank slate. Therefore, Zheng's work focuses on dynamics only, but we take the learning process into account. To some extent, the step from design to learning is an important one made by machine learning/deep learning, and it is not obvious at all. It is especially the case here. While the architecture and final phenomenon are similar between the two works, which is actually desirable, the underlying mechanism is largely different, as discussed in Sections 4.2 and 4.3 in the main text (also Figure 11 in the appendix). For example, _during training_, how does a blank-slate DAE evolve so that an initial global random guess breaks the symmetry and tends to attend to a local spatial area, followed by further capturing the texture details? _The new insight_ we provide is that the _symmetry-breaking behavior_ (grouping each single object into alternative synchronized assemblies, given multiple objects) can be learned with a succinct _symmetry-preserving_ loss function (reconstructing the whole multi-object scene). Zheng's work does not provide insight at this level. Additionally, evaluation of __generalization__ is also not accounted for in Zheng's work, since it does not have a complete learning scheme. Second, due to the difference in the questions answered, the __interpretation of the biological factors__ (or network architecture) is different. In Zheng's work, factors like delayed coupling and spiking dynamics function only to _shape the network dynamics_ so that the convergent state is synchrony. 
However, in our work, we provide additional new insights that factors like the narrow time window, refractoriness, etc., can even act as inductive biases to _bias the learning_ process (Figure 2 in the Appendix). Zheng's work does not provide insight about the network architecture at this level. Third, due to the different interpretation, the __underlying hypothesis about how the brain could implement such a model__ is totally different. Zheng's work must assume a _pre-wired_ cortical bottom-up/top-down pathway as a pre-trained single-object DAE, which is constructed during development, determined by genes or evolution. But how can each potential single object be accounted for by pre-wiring? In contrast, our work relaxes this assumption and suggests that the brain could _gradually learn_ such a model in a way consistent with predictive coding throughout life, even after development. Fourth, to some extent, the DAE module (Section 3.3) in our work is not 'really' a DAE (it is not trained separately/explicitly to denoise) but a general encoder-decoder structure. The _denoising_ is achieved by the _whole system_ during iteration instead of by the DAE module on its own (as in Zheng's). We term it a DAE to provide a more intuitive picture of the architecture. However, __the nature of the architecture__ is different between Zheng's work and ours. We believe these aspects conceptually distinguish this work from Zheng's, beyond being technically advanced. In addition to this conceptual-level novelty, there are also technical differences. Some are introduced in Sections 3.4 and 3.5 in the main text, including the __novel loss function__ and dealing with the __gradient__ along the delayed path and refractory period. Besides, one major difference in the study is that we provide a __new quantitative evaluation scheme__ for the grouping representation (more efficient). 
Zheng's work uses K-means to cluster the spike trains (to compute the AMI), which makes it hard to explicitly distinguish time coding from rate coding (A.13.2 SI). In our work, we resolve this problem by explicitly taking the timing code into account when clustering. That is why we introduce K-medoids and the VP-metric in Section 3.6. SynScore also differs and is more rational. Therefore, the quantitative scores have different interpretations between the two works. This is the technical novelty of the evaluation. In sum, while the basic idea of combining a spiking neural network and a denoising autoencoder is shared between Zheng's work and ours, they differ in (1) motivation (representation/dynamics vs. learning/generalization), (2) the nature/interpretation of the architecture, (3) the underlying mechanism/bio-picture, and (4) the evaluation scheme. These stress the novelty of this paper. __A2. For explainability__: to analyze the grouping, the representation must in fact be explainable. Accordingly, by clustering/visualizing the spiking pattern in Section 4, we make the representation in the SCS explainable. __A3. For advice on revision__, including explainability/pointers/figures/code/references/limitations, we will take this helpful advice into account when we revise the paper. __A4.__ $\tau_w$ is an internal model parameter, while $\tau_1$ describes the externally observed synchrony (Figure 2 in the main text). They are related but need not be identical. __A5. For future work on 3D and background__: this is actually one of our ongoing works (grouping in the CLEVR dataset, i.e., 3D objects with varied backgrounds). We find that the two limitations are primarily caused by the binary SCS in pixel space, because 3D objects require at least grey-scale images, and a general background is also difficult to account for in a binary image. As a result, we decided to implement the SCS as a __binary hidden layer__ so that there are no constraints on the input type (binary/grey-scale/colored). 
The preliminary results of the ongoing work show that such a change of network architecture preserves the basic mechanism, and grouping/synchrony still emerge in the hidden layer. We will further study this situation as future work. --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed answers. My questions and suggestions have been adequately responded to. Assuming that the contents of the rebuttal will be appropriately reflected in the revised paper, I raised the score for Presentation from 2 to 3, that for Contribution from 2 to 3, and that for Rating from 6 to 7 (about the last one, considering the point that this work is related to machine learning, computer vision, and neuroscience). \# About **A4**, it would be further beneficial if equation (6) were visualized in a figure with a clarification of the meaning of the time window ($\tau_w$). --- Reply to Comment 1.1.1: Title: Response to the Reviewer fSiP Comment: We thank the reviewer for recognizing the contribution of the work and providing the helpful suggestions. We agree that $\tau_w$ should be further explained in the main text. As a result, we slightly modified eq. (6) as: $$s'_i(t) = CD\big(s_i(t), \ldots, s_i(t-\tau_w+1)\big) = \begin{cases} 1, & \text{if } \sum_{t'=0}^{\tau_w-1} s_i(t-t') \ge 1 \\ 0, & \text{if } \sum_{t'=0}^{\tau_w-1} s_i(t-t') < 1 \end{cases}$$ where $\tau_w = 2$ in the current model. In this way, $\tau_w$ is explicitly included in eq. (6), and we clarify its meaning. We will also try to add $\tau_w$ to Fig. 2(b) in the main text without introducing too much detail.
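The coincidence-detector readout of eq. (6) amounts to a sliding-window OR over each neuron's spike train. A minimal sketch of this readout (our own illustration, not the authors' implementation; the function name and list layout are assumptions):

```python
def coincidence_detect(s, tau_w=2):
    """Readout s'_i(t): 1 if neuron i fired at least once within the
    last tau_w time steps (a sliding-window OR over the spike train).

    s: binary spike train of a single neuron, as a list of 0/1 values.
    Returns a list of the same length.
    """
    return [
        1 if sum(s[max(0, t - tau_w + 1): t + 1]) >= 1 else 0
        for t in range(len(s))
    ]
```

For example, `coincidence_detect([0, 1, 0, 0, 1], tau_w=2)` returns `[0, 1, 1, 0, 1]`: each spike is "held" for $\tau_w = 2$ steps, which is how the narrow time window enters the readout.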
Summary: This paper tackles the challenge of grouping information from individual visual elements into whole perceptual units, i.e., how compositional generalization is achieved, by proposing GUST (Grouping Unsupervisely by Spike Timing network), which leverages spiking synchrony for grouping. The framework introduces a spike coding space (SCS) and a denoising autoencoder (DAE) to control temporal grouping, enabling the representation of scenes with combinatorially different structures in simulated datasets. Strengths: 1. This paper combines insights from the neuroscience literature, including correlation brain theory and temporal coding, to enable grouping mechanisms in unsupervised learning. 2. The paper tackles a non-trivial challenge w.r.t. designing a spike timing neural network properly, given that spiking is non-differentiable, the requirement of high temporal precision, and the interpretation of representations. Specifically, the proposed algorithm enables learning directly from multi-object inputs. It also enables a gradient-based training framework. The paper further leverages non-linear metrics to evaluate grouping performance based on neuronal synchrony. Overall, the paper proposes elegant solutions for these challenges. 3. The paper examined the effect of biological constraints in the algorithm through an ablation experiment, and demonstrated superior grouping performance in novel scenes compared to the baseline models RC-bind and Slot Attention. Weaknesses: 1. I'd love to see stronger evaluation benchmarking with more metrics, given that unsupervised clustering often gives different results based on how it is tuned and is often biased by noise. The authors leveraged the adjusted mutual information (AMI) to evaluate grouping quality, and used the Silhouette score (SC) to evaluate the inner-cluster coherence level. 
For example, SC is generally higher for convex clusters, and it is known that different metrics sometimes prefer different methods/algorithms in noisy clustering cases. Although AMI adjusts for chance, it is high when there are pure clusters in the clustering solution (see [ref](https://jmlr.csail.mit.edu/papers/volume17/15-627/15-627)). The dataset's clustering distribution is unclear, which makes it hard to evaluate potential bias in their metrics. Specifically, I'd like to see more analysis of how the benchmarking performance varies across different random seeds, numbers of clusters, etc. Moreover, I'd recommend adding benchmarking results using ARI (adjusted Rand index) to provide additional information. 2. The grouping evaluation focused on clustering different objects. This is very limited for persuading readers of how the algorithm demonstrates superior compositional grouping. I'd recommend adding more evaluations of different aspects of composition. For example, instead of classifying objects of different shapes, is the proposed algorithm able to learn compositional structures within an object? For example, can different body parts of a complicated object be learned and segmented? 3. A major challenge of unsupervised learning on multi-object inputs is when objects overlap to different degrees or are corrupted by different noise levels. There is no analysis quantifying this distribution in the datasets. Although the image in Figure 4 indicates a partial overlap of the triangle and square shapes, Figure 5 suggests those shapes are mostly clearly separated in space. It is thus unclear whether the proposed algorithm can deal with grouping and unsupervised learning in these scenarios. 4. The whole idea of leveraging temporal coherence as a way to achieve generalization and classification in learning is not new, and has been explored in the literature. 
For example, this [paper](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005137) shows that a spiking network driven by input timing dependent plasticity (ITDP) could perform a visual classification task well and generalize to unseen datasets. This [paper](https://arxiv.org/pdf/1807.10936.pdf) shows a hierarchical spiking architecture in which motion selectivity emerges in an unsupervised fashion. Additionally, how combinatorial coding can emerge in the cortex has also been discussed in [paper 1](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2693376/), [paper 2](https://www.princeton.edu/~wbialek/our_papers/osborne+al_08.pdf), and [paper 3](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4605134). This paper does not cite these relevant works, nor does it discuss how the proposed algorithm outperforms or links with these previously proposed combinatorial coding mechanisms. It is thus difficult to evaluate the novelty and whether this paper brings additional new insights in bio-inspired algorithms (see more comments in Limitations). Technical Quality: 3 good Clarity: 3 good Questions for Authors: See corresponding questions and suggestions in the Weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper proposes a brain-inspired algorithm and assumes that representations of combinatorially different structures happen within the same spiking neural network for classification. However, we already know this is not entirely true in mammalian brain circuits. Often there is a clear hierarchical and modular structure, where low-level features are represented in early-stage cortical regions, which further enables compositional encoding. 
From this aspect, the paper seems limited in bringing more insights for biological interpretation. From the aspect of computational algorithms, the paper does not demonstrate superior performance compared to other SOTA unsupervised learning methods based on deep neural networks. Therefore, this paper seems limited in terms of bringing higher impact to both the neuroscience and machine learning fields. I would suggest the authors elaborate on the limitations from the perspective of biological interpretation, and on the future directions of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: __A1 Benchmarking analysis and selective bias__: See __A1, A2 in global response__. __A2. Concerns on the hierarchical case (brain organization / hierarchical grouping)__. First, in this paper we consider combinatorial generalization of _single-level grouping_, where varying the object number appears complete for composition, rather than _compositional grouping_. Second, the GUST architecture in this paper serves as the basic building block of more complex systems for more complex binding (the bigger picture). Therefore, the GUST architecture is consistent with the hierarchical and modular structure of the mammalian brain and can be generalized to account for hierarchical grouping of 'compositional structures': each GUST acts as a 'cortical column' that groups features belonging to the body/part of the respective level. Third, we are actually pursuing this idea in ongoing work: a hierarchical organization of multiple GUSTs for hierarchical grouping like the brain ('early stage for low-level features', as the reviewer suggested), which we term _representing the part-whole hierarchy of the visual scene_. Specifically, each GUST module serves as a 'column' at each level, and different levels of GUSTs (characterized by their own time-scale constants, fast for low-level GUSTs and slow for high-level GUSTs) are organized in a hierarchy (__Fig 6 a,b in global response__). The emergent synchrony (representing part-whole) is in turn also of different time scales, fast for parts and slow for wholes, nested within each other like gamma-theta coupling. Preliminary results are shown in __Fig. 6 c in global response__. However, as argued in A.1 (line 108) in the SI, this paper prefers to keep the architecture minimal and general and leaves 'compositional grouping' (as the reviewer suggested) for future work. For other aspects of grouping/generalization results, see A.12 in the SI. __A3 Concerns on overlap__. The grouping is robust w.r.t. 
the overlap or noise, since the attractor dynamics provides a completion process. For example, the top-down feedback is a completion of the bottom-up noisy firing (Figure 11 in SI). See __A3 ~ A6 in global response__ for more. __A4 Concerns on contribution to the machine learning / neuroscience literature__. First of all, unsupervised grouping is a hard problem in the deep learning / CV field (as reviewer fSip correctly points out, also stressed in ref [7] in the main text), and it is also a fundamental operation for the human brain that still lacks a clear mechanism. Therefore, combining the two sides on the one hand provides a general solution to a basic computational problem in machine learning, and on the other hand provides a systematic algorithmic-level understanding of various biological structures/phenomena/factors (like top-down feedback in cortex, delay coupling, refractoriness, synchronized assemblies, gamma oscillation, etc.), bridging the levels (see A.5 in SI). As far as we know, both contributions are novel. Secondly, regarding related work on bio-inspired algorithms, the reviewer seems to have missed the core problem in this paper, grouping, which motivated the whole story. We do not exploit coherence for general learning / classification / generalization (as the ITDP paper and the optical flow paper do), but to solve the grouping problem in a neural network, which is a problem at a more basic level: representation (learning and generalization are relatively secondary). Therefore, they are totally different algorithms. Besides, the aforementioned two papers use feedforward architectures without feedback, and therefore coherence is not an emergent property of recurrent dynamics. Also, the dynamical nature of their 'coherence' is different from ours (though they use the same term). 
Our model has switching synchronized groups, which is non-trivial (it needs symmetry-breaking, a __new insight__) and essential for grouping (a __new insight__), while the two related papers only consider a single synchronized group for classification. Therefore, they differ from ours in at least (1) why coherence, (2) what coherence, and (3) how coherence. Thirdly, regarding the combinatorial code: it is indeed an elegant abstraction of spike coding beyond rate coding. Here, we specifically focus on the synchrony code itself. The presence of the synchrony code in cortex and its contribution to the grouping function has been cited as [17~24, 26-28] in the main text. There is such a wide range of support in neuroscience that we cited the most relevant works. Besides, the two works by William Bialek are appealing theoretical assessments of the information contained in the combinatorial code, with a _descriptive_ model to formulate the probability, instead of providing a _computational mechanism_ for how the combinatorial code can emerge (contrary to what the reviewer stated) and how it functions to solve problems. Our work focuses on the latter two sides (a __new insight__). Even so, we share a common spirit regarding additional information beyond the rate code, and these works may potentially inspire our future work, like binding by polychronization. Fourthly, the CTC theory (old or new) takes the existence of coherence for granted and directly focuses on its contribution to communication, instead of the other way around: communication also contributes to coherence. Therefore, it is not a closed framework. In our work, we provide a closed framework of (1) how coherence contributes to communication between the DAE and SCS and (2) how communication between the DAE and SCS also contributes to the emergence of coherence (Section 4.2 in the main text; A.4.1, A.4.2 in SI). It is an iterative process instead of the one-way process in CTC (a __new insight__). Lastly, we discussed limitations/future directions in A.1 of the SI. 
As the reviewer suggested, we will add more on biological limitations when we revise the paper, including correlation-based plasticity like ITDP, more general combinatorial coding like polychronization, and hierarchical grouping in a hierarchical architecture. Regarding the concern about SOTA, Slot Attention is one such recent SOTA method on color images (see A.9.6 in SI).
Rebuttal 1: Rebuttal: We thank all the reviewers for spending their valuable time carefully reading the paper and writing reviews. The comments and suggestions are very valuable and insightful, which helps us revise the paper and motivates future directions. We take all suggestions into careful consideration as we revise the paper. Specific responses to each reviewer are provided in separate rebuttal panels; in this global response, we address several concerns that require new figures or relate to concerns of multiple reviewers. ___Response to questions on evaluation/benchmarking___ __A.1 (from reviewer dadF) Rationality of AMI and SC for evaluation, based on the data distribution__. On the one hand, _AMI / ARI are common-practice evaluation metrics for grouping_ (refs [14, 38, 48, 50, 51, 52, 53] in the main text, discussed in A.13.3 in SI). On the other hand, we include the background during both K-Medoids clustering and AMI evaluation (K = object number + 1), since removing the background introduces an explicit prior of foreground-background segmentation (e.g., what is an object), which should also be considered for general grouping ability. Actually, during the experiments, we found that confusing the near-boundary regions of Shapes with the background is one reason that challenges the evaluation scores. In this case, the cluster sizes are relatively unbalanced, because the background tends to be the bigger cluster. Since AMI is more suitable for unbalanced situations, it is a more suitable metric than ARI, which fits the balanced case better (e.g., if the background is explicitly removed for evaluation). Also see __Fig 2__ right in the pdf. For SC, we visualize the low-dimensional structure of clusters through PCA in the metric space of spike trains induced by the VP-metric (__Fig 5__ in the pdf). It is shown that in both the non-overlap and overlap cases, clusters are well-centered and separated ('convex' in the metric space), so that SC is suitable for evaluating the coherence level. 
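For readers unfamiliar with it, the Victor–Purpura (VP) spike-train metric mentioned above can be computed with a small edit-distance dynamic program: shifting a spike by $\Delta t$ costs $q\,\Delta t$, while inserting or deleting a spike costs 1. A generic sketch (not the authors' code; the function name and the choice of `q` are illustrative):

```python
def vp_distance(t1, t2, q=1.0):
    """Victor-Purpura distance between two spike trains.

    t1, t2: sorted lists of spike times.
    q: cost per unit time of shifting a spike; inserting or
       deleting a spike always costs 1.
    """
    n, m = len(t1), len(t2)
    # G[i][j] = distance between the first i spikes of t1
    # and the first j spikes of t2
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)   # delete all i spikes
    for j in range(1, m + 1):
        G[0][j] = float(j)   # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(
                G[i - 1][j] + 1,                                   # delete spike i
                G[i][j - 1] + 1,                                   # insert spike j
                G[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]),  # shift
            )
    return G[n][m]
```

For instance, `vp_distance([10.0], [12.0], q=0.1)` is `0.2` (shifting the spike by 2 time units is cheaper than one deletion plus one insertion), while identical trains have distance 0; the parameter `q` sets the temporal precision at which two trains count as synchronized.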
__A.2 (from reviewer dadF) More benchmarking results w.r.t. random seeds, number of clusters, AMI vs ARI__. First, we used 5 random seeds to evaluate the averaged AMI score in Figure 6 in the main text, and the error bar is negligible (Table 3 in the Appendix). Thus, the averaged AMI over a large dataset is not influenced by the uncertainty of AMI on each single sample (shown in __Fig 2__ in the pdf), and there is no bias. Second, the benchmarking in Figure 6 in the main text (with different object numbers) already involves different numbers of clusters (K = object number + 1). As far as we understand, this is exactly what the reviewer asked for. __Fig 4__ in the pdf is a more detailed benchmarking with varied object/cluster numbers. Third, as explained in A1, AMI is a more suitable metric than ARI in our case. We also provide benchmarking on ARI (__Fig 4__ in the pdf) as a supplement. Lastly, we provide additional benchmarking results w.r.t. different overlapping degrees in __Fig 4__ in the pdf (both AMI and ARI). __A.3 (from reviewer 9PtM). Compare the sensitivity among models (performance vs. overlap ratio)__. Please see __Fig 4__ in the pdf (upper), where the comparison is made for the '3 train objects and 3 test objects' case (the major case). Due to the page limit, it is infeasible to provide all 9 combinations (2/3/4 train objects x 2/3/4 test objects) as in Figure 6 in the main text. ___Response to questions on Overlap / Noise___ __A.4 (from reviewer dadF) Analysis of the distribution of overlap in the dataset__. Figure 5 in the main text does not suggest that shapes are mostly clearly separated (as the reviewer argued), because this image is just one randomly selected example. Besides, the example in Figure 4 in the main text, in contrast, has partial overlap, which is much more common (__Fig 1__ in the pdf). We visualize the distribution of overlap in __Fig 1__ in the pdf. __A.5 (from reviewer dadF) Whether GUST can deal with overlap__. Figure 4 is one example suggesting that GUST can deal with partial overlap through its denoising mechanism. 
Actually, in the Shapes dataset, overlap is quite common (__Fig 1__ in the pdf). We provide a systematic quantitative analysis of the overlap effect in __Fig 4__ in the pdf. __A.6 (from reviewer dadF) Whether GUST can deal with noise__. The DAE in GUST can denoise the stochastic firing pattern in the SCS through its feedback. However, input noise will linearly affect the spike firing, since in the current implementation, the input _hard gates_ the spiking neurons. If grouping is decoded from the SCS ($s(t)$), the grouping will be linearly affected, but if grouping is decoded from the feedback ($\gamma(t)$), it is robust to noise. It is plausible that adding horizontal spatial connections in the SCS (future work) would provide pattern completion in the SCS based on the proximity law (a Gestalt principle) when given noisy input. ___Preliminary results on hierarchical grouping___ __A7. (reviewer dadF) How GUST can be extended to hierarchical grouping__. A schematic architecture and preliminary results for hierarchical grouping (part-whole hierarchy) are shown in __Fig 6__ in the pdf. __A8. (reviewer 9PtM) How uncertainty encoded in neuronal coherence can be used (for hierarchical grouping)__. Here we provide an example from one ongoing work (part-whole hierarchy). The overall architecture is shown in __Fig 6__ a,b in the pdf. To represent the part-whole hierarchy of a visual scene, it is efficient to search for solutions in a divide-and-conquer way: whole first and then parts (a possible brain strategy). Beginning with a high temperature, high-level grouping achieves high-level coherence (slow oscillation, coarser spatiotemporal structure) while the low level still remains unstructured/random; __this high-level coherence (slow oscillation) indicates high certainty in whole-level grouping and is read out by inhibitory neurons to lower the temperature (decrease the excitability) so that low-level grouping is further achieved__ (faster oscillation, finer spatiotemporal structure), _conditioned on the high-level grouping_. 
Overall, coherence acts as an __internal indicator__ of a good solution with high confidence. Pdf: /pdf/18e50e0baeb782a0b5b584833b322501ea73fdf3.pdf
NeurIPS_2023_submissions_huggingface
2023
Training neural operators to preserve invariant measures of chaotic attractors
Accept (poster)
Summary: This paper argues that the MSE loss is insufficient for learning the solution operator of chaotic dynamics. Two approaches are proposed to solve the problem: one uses the Sinkhorn loss, and the other uses contrastive learning. The authors evaluate the effectiveness of the proposed methods through experiments on several physical systems. Strengths: - The problem of learning solution operators for chaotic dynamics is a challenging and interesting one, and the authors take an interesting point of view (preserving invariant measures). - To my knowledge, the introduction of physics-informed optimal transport and contrastive learning is a new approach. Weaknesses: - Although the problem setting is important and interesting, the proposed method seems to be a straightforward combination of existing technologies. - There is room for improvement in the writing of the paper. - There is no reference to previous studies on the autoregressive approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I understood that the prediction in equation (3) is the same as the autoregressive PDE solver proposed in [R1]. It would be good to cite such previous studies. - Since the autoregressive approach performs forecasting iteratively, it seems to me that the numerical error increases in the long-term forecasting setting. Could you add a discussion on this point? Also, is it possible to forecast at arbitrary time intervals $\Delta t$ during training and testing? - I think it would be easier to read if the descriptions of previous studies (optimal transport and contrastive learning) and the proposed method were separated. In Section 3, I feel that the novelty of this paper is not described clearly. - There are other ways (e.g., Maximum Mean Discrepancy) to measure the distance between distributions besides optimal transport. It would be good to clarify the motivation for using optimal transport, and the same goes for contrastive learning. 
[R1] Johannes Brandstetter, Daniel E. Worrall, Max Welling, Message Passing Neural PDE Solvers, ICLR, 2022. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It would be good to discuss in detail the advantages and disadvantages of using optimal transport and contrastive learning. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback! **W1**: > Although the problem setting is important and interesting, the proposed method seems to be a straightforward combination… Our proposed approaches are not a simple reshuffling of methods to solve a variant of the originally proposed problem (e.g. general representation learning) but rather a completely new use of contrastive learning (CL) and optimal transport (OT) to address a very different problem: training emulators of chaotic systems with good long-term behavior. Our contributions include both framing the problem and then developing new and adapting existing methods to handle this new problem domain. Our application of CL and OT is deeply motivated by our problem formulation, without which it would be impossible to justify their use in this context. Furthermore, adapting CL for this application is highly non-trivial since it is well outside of its traditional use as a general representation learning method. In the paper, we argue that CL has the right properties to learn exactly the kind of time-invariant statistics that we aim to emulate. We then show that the learned latent space can be adapted into a loss function to train emulators that successfully capture long-term behavior. The final paper will emphasize these points. **W2**: > There is room for improvement in the writing of the paper. Please let us know if there are any specific writing issues or sections in need of clarification. **W3/Q1**: > There is no reference to previous studies on autoregressive approach. > I understood that the prediction in equation (3) is the same as the autoregressive PDE solver… [R1]. We would be happy to include additional references on emulators. Our work focuses on developing regularizers for training emulators on chaotic systems and not on autoregressive training, and we do not claim autoregressive training to be a novel contribution here. 
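An autoregressive emulator applies a one-step map iteratively; for chaotic dynamics, pointwise error then grows exponentially while long-term statistics (the invariant measure) remain stable. A toy illustration of this dichotomy using the logistic map (our own sketch, not one of the paper's benchmark systems or its emulator):

```python
import numpy as np

def logistic_rollout(x0, n):
    """Iterate the chaotic logistic map x -> 4x(1-x): the simplest
    stand-in for a one-step autoregressive emulator applied n times."""
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

a = logistic_rollout(0.3, 100_000)
b = logistic_rollout(0.3 + 1e-8, 100_000)   # tiny initial perturbation

# Pointwise forecasts diverge: the error quickly saturates at O(1)...
pointwise_err = np.abs(a[:200] - b[:200])
# ...but long-term statistics of the two trajectories agree closely
# (both sample the same invariant measure of the attractor).
mean_gap = abs(a.mean() - b.mean())
```

Here `pointwise_err` becomes order-one within a few dozen steps even for a $10^{-8}$ perturbation, while `mean_gap` stays tiny; this is exactly why matching attractor statistics, rather than the MSE of long rollouts, is the sensible training target for chaotic systems.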
**Q2**: > Since the autoregressive approach performs forecasting iteratively, it seems to me that the numerical error increases in the long-term forecasting setting. The key is that, for chaotic dynamics, it does not matter what kind of emulator you choose (autoregressive or not), you will never be able to exactly predict the state of the system in the long term. That is, for any emulator, the mean squared error for long-term predictions will exponentially increase with time. That is why we focus on training emulators that capture long-term statistical behavior, which is the relevant measure of long-term forecasting performance for chaotic systems. > Also, is it possible to forecast at arbitrary time intervals… Our emulator does perform 1-step-ahead forecasting for a specific $\Delta t$, which can be applied iteratively, but forecast accuracy at a specific time point is not the focus of our work (and indeed an impossible task for chaotic systems). A good analog of our goal would be a climate model which is unable to make an exact prediction for an arbitrary time point but does provide good statistical insights into long-term trends such as the frequency and intensity of hurricanes. **Q3**: > I think it would be easier to read if… the previous studies… and the proposed method are separated. In the paper, previous studies are presented in the related work (Section 1.2) and the two proposed approaches are presented in separate sections: physics-informed OT (Section 3.1) and unsupervised CL (Section 3.2). > In section 3, I feel that the novelty of this paper is not described clearly. Section 3 presents our approaches to solving the problem formulated in Section 2. The novelty of the work is discussed in our contributions statement (Section 1.1). To reiterate, we show that the standard method for training emulators is insufficient for modeling chaotic dynamics and instead propose to train the emulator to preserve the invariant measure of the chaotic attractor. 
With this new problem formulation, we propose two new approaches for training the emulator to match the attractor statistics. **Q4**: > There are other ways (e.g., MMD) to measure the distance of a distribution… It would be good to clarify the motivation… There are many ways of measuring differences between distributions, including point-wise divergences such as KL, moment-based distances such as MMD, as well as the Sinkhorn distance from optimal transport—which we use in the paper. Using MMD would result in a similar method to our OT approach and act as a drop-in replacement for the Sinkhorn distance for matching the distributions of summary statistics. In fact, we have included a new experiment where we train an emulator using a Gaussian-kernel MMD loss (Table R4). We find that MMD, even after careful hyperparameter tuning, does not perform as well as the Sinkhorn loss. We chose the Sinkhorn loss because it has many well-known advantages over point-wise divergences and kernel MMD distances: e.g., for two distributions that do not have overlapping support, the Wasserstein distance underlying the Sinkhorn loss still provides a high-quality gradient signal that makes it useful for optimization. MMD also requires many hyperparameter choices, including the choice of kernel and kernel-specific parameters, that have a significant effect on the properties of the final distance measure. Contrastive learning, on the other hand, allows us to learn a set of invariant statistics from the data. This provides a method that does not rely on prior domain knowledge about the chaotic attractor and instead uses the multi-environment setting to automatically learn informative statistics. This motivation is further discussed in the introduction (Sec. 1), when we propose the method (Sec. 3.2), and in the discussion (Sec. 5). [R1] Johannes Brandstetter, Daniel E. Worrall, Max Welling, Message Passing Neural PDE Solvers, ICLR, 2022. 
**L1**: > It would be good to discuss in detail the advantages and disadvantages… Please see the discussion in the general rebuttal. --- Rebuttal Comment 1.1: Title: Reply Comment: Thank you for your response. I now have a better understanding of this paper, and the experimental results using MMD are also useful. I agree that the idea of using OT and CL to predict chaos dynamics is new in itself. I will raise my score by 1. --- Reply to Comment 1.1.1: Title: Thank you for your reply! Comment: Dear reviewer, we are sincerely thankful for your constructive feedback and appreciation of our work! We will ensure to include the experimental results using MMD, and discussions of our OT and CL approaches in the final paper.
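As background for the MMD comparison raised in Q4 above, a biased Gaussian-kernel MMD estimate between two sample sets can be sketched as follows; the function name, bandwidth, and demo data are illustrative assumptions and not the implementation behind Table R4.

```python
import numpy as np

def mmd_gaussian(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between sample sets x and y
    (rows are samples) under a Gaussian kernel."""
    def gram(a, b):
        # Pairwise squared Euclidean distances -> Gaussian kernel values.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

rng = np.random.default_rng(0)
# Matched distributions should give MMD near zero; shifted ones should not.
matched = mmd_gaussian(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
shifted = mmd_gaussian(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

The `bandwidth` here is exactly the kind of kernel-specific hyperparameter discussed in the Q4 answer: pushing it far from the data's length scale makes the two cases nearly indistinguishable, weakening the training signal.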
Summary: This paper proposes two methods to preserve invariant measures of chaotic systems in the multi-environment setting when training neural operators. Given some expert knowledge of the underlying dynamical systems, they propose a new optimal transport loss, which uses this knowledge to match the statistics. Without expert knowledge, they use a contrastive learning approach to learn invariant statistics, which are used to construct a feature loss that preserves these invariant statistics. Both methods show better performance in reproducing the invariant statistics of noisy chaotic systems. Strengths: 1. Long-term prediction of chaotic systems is difficult; this paper proposes two methods to better represent chaotic systems using neural operators by preserving the invariant statistics of chaotic attractors. 2. The methods are robust to noise and chaos. Weaknesses: 1. These two methods are aimed at different problem settings: the optimal transport-based approach needs prior knowledge, while the contrastive learning approach does not; the optimal transport-based approach seems not to depend on the multi-environment setting, whereas the contrastive learning approach does depend on this setting. These two methods are relatively separate, and the relationship between them is not close. 2. The two methods are mixed together, so the structure of the paper does not look clear. 3. It seems that the optimal transport-based approach requires too many equation constraints. For example, in the two examples given in the paper, almost all of the terms in the underlying dynamical equations are used. 4. The contrastive learning approach places higher requirements on the diversity of the data. 5. The introduction of the Noise Contrastive Estimation (InfoNCE) loss is not described clearly in the main text. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. 
Is it possible to provide the problem setting, limitations, and advantages of the two methods more clearly, and clearly explain the difference between the two methods, instead of introducing them together? 2. Whether the performance can be maintained if more limited prior knowledge is utilized for the optimal transport-based approach? 3. If all those equation constraints have to be used to ensure performance, should it be compared with methods such as physical informed neural operator that utilize similar information instead of the conventional neural operator? 4. As for the contrastive learning approach, what kind of diversity does the data need to meet to ensure better performance? Could you provide an intuitive explanation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback! **W1**: > These two methods are… relatively separate… While the two proposed methods use different machine learning tools, they are very much related by a shared goal and overall approach: to train emulators to capture long-term chaotic behavior by preserving the invariant measure of the attractor. In fact, both methods go about this goal in conceptually similar ways, by matching the statistics learned by the emulator to statistics from the data. The primary difference is how those statistics are identified, whether it is from prior domain knowledge or learned directly from the data. Our contributions consist of both identifying and formulating the shared problem (Section 2) and then proposing two approaches for solving this problem (Sections 3.1 and 3.2). The fact that contrastive learning (CL) can preserve chaotic attractors nearly as well as methods that use prior knowledge is remarkable. The multi-environment setting provides an avenue for identifying the relevant statistics for characterizing chaotic attractors. The CL approach uses multiple environments explicitly during pre-training, while the OT approach uses this information implicitly as a part of the prior knowledge used for choosing informative summary statistics. In other words, the best summary statistics for training an emulator are precisely those that can tell the difference between environments/attractors and therefore provide a signal to push the emulator to match the correct attractor. The final paper will better emphasize these points. **W2**: > The two methods are mixed together… The two methods are presented in separate sections: physics-informed OT (Section 3.1) and unsupervised CL (Section 3.2). Both represent approaches for training an emulator to preserve the invariant measure of the attractor—the problem formulated in Section 2. **W3**: > It seems that the optimal transport-based approach requires too many equation constraints… We agree! 
This was the motivation for developing the CL approach. The OT approach is precisely designed to use existing domain knowledge to choose informative summary statistics and train a better emulator. That said, we are interested in better understanding how the quality and quantity of the chosen statistics influence the trained emulator. To better study this effect, we have performed additional experiments (Table R2 in the rebuttal PDF) with a reduced set of summary statistics including using a minimally informative statistic to demonstrate when the OT approach begins to fail due to a poor choice of statistic. **W4**: > As for the contrastive learning approach, higher requirements… for the diversity of data. The multi-environment setting is a very natural setting for many scientific and engineering applications, where different measured trajectories often have varying parameters due to environmental noise or varying control inputs. It also presents a more challenging generalization problem than the single-environment setting. As with any unsupervised representation learning method, CL requires sufficient data diversity, which in this case comes directly from the multi-environment setting. Please also see our response below to Q4. **W5**: > The introduction of the Noise Contrastive Estimation (InfoNCE) loss is not described clearly in the main text. We introduce InfoNCE in Section 3.2 eq. 15, cite its origin as a contrastive learning loss, and explain the intuition for its use and its relevance to our problem. **Q1**: > Is it possible to provide the problem setting, limitations, and advantages… The two methods share a problem setting as presented in Section 2. Please see the general rebuttal for additional discussion comparing the approaches. 
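As background for the InfoNCE loss discussed under W5, the standard form treats matched rows of two embedding batches as positive pairs and all other rows as negatives; this NumPy sketch is illustrative and not the paper's exact eq. 15 (the cosine similarity and temperature value are assumptions).

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Standard InfoNCE loss: row i of z_a and row i of z_b form a
    positive pair (e.g. two windows from the same environment); the
    other rows in the batch act as negatives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / temperature         # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()             # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 8))
loss_matched = info_nce(z, z + 0.01 * rng.normal(size=(32, 8)))  # true pairs
loss_random = info_nce(z, rng.normal(size=(32, 8)))              # unrelated rows
```

Minimizing this loss pulls embeddings of windows from the same attractor together, which is what makes the learned statistics usable as a feature loss for emulator training.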
**Q2** > Whether the performance can be maintained if more limited prior knowledge… In our updated results (Table R2), we show the effect of using a smaller set of summary statistics as well as using a minimally informative statistic for the OT approach. You can often get away with using a fairly limited set of summary statistics if the chosen statistics are highly informative. If there is a real lack of prior knowledge, then our CL approach offers a very compelling alternative. **Q3** > If all those equation constraints have to be used to ensure performance, should it be compared with methods such as physical informed neural operator… We believe the reviewer means statistics instead of constraints since we impose no constraints. The OT approach matches summary statistics of the model outputs to those of the data and never treats those statistics as terms in a governing equation. Another important distinction to make here is that methods such as the physics-informed neural operator assume a *known* PDE and then use a neural operator to fit the solution, which can be done even without training data. We are using the neural operator to learn a time evolution operator directly from data without knowing the PDE. (We agree that assuming we know what statistics to preserve with the OT method can be problematic, so we developed the CL method to address this.) **Q4**: > As for the contrastive learning approach, what kind of diversity does the data need to meet to ensure better performance?... Intuitively, CL uses the data diversity provided by the multi-environment setting to choose invariant statistics by identifying which statistics are informative for distinguishing between the various attractors seen in the data. Empirically, our experiments show that randomly varying just one or a few system parameters is enough to provide the necessary data diversity for CL. 
In our updated results (Table R3), we show a new experiment that further reduces the range of the only varying parameter F in the Lorenz-96 dataset from [10, 18] to [16, 18]. Even with such minimal diversity, we still see good performance using our CL approach. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed replies, which cleared up most of my confusion. The added experiments explain the performance of the model in the statistically limited case, making the discussion more systematic. Based on this, I increased the score accordingly. --- Reply to Comment 1.1.1: Title: Thank you for your reply! Comment: Thank you for your constructive feedback of our paper and appreciation of our work! We will make sure to include the performance of the OT using limited statistics and other discussed clarifications, in the revision of our paper.
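For context on the Lorenz-96 dataset discussed above, the system has the standard form $\dot x_i = (x_{i+1} - x_{i-2})\,x_{i-1} - x_i + F$ with cyclic indices; this minimal RK4 integrator is an illustrative sketch, not the paper's data-generation pipeline (the grid size, step size, and forcing value are assumptions within the stated range).

```python
import numpy as np

def lorenz96_rhs(x, forcing):
    """Standard Lorenz-96 right-hand side with cyclic indexing:
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing):
    """One classical Runge-Kutta 4 step."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# One trajectory at F = 16, the low end of the reduced [16, 18] range.
x = 16.0 * np.ones(40)
x[0] += 0.01  # nudge off the unstable fixed point x_i = F
for _ in range(2000):  # integrate to t = 20 to land on the attractor
    x = rk4_step(x, dt=0.01, forcing=16.0)
```

Varying the forcing across trajectories is what produces the multi-environment diversity that the CL approach relies on.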
Summary: The authors proposed a training framework to preserve invariant measures of chaotic attractors. First, they identify that training standard neural operators using MSE on chaotic dynamics does not work. Then they suggest training neural operators to preserve the invariant measures of chaotic dynamics and the time-invariant statistics. Importantly, they proposed two approaches to train neural operators to preserve those statistics. One is the optimal transport-based approach, and the other is the contrastive learning approach. They empirically showed that both approaches capture the true invariant statistics. Strengths: The paper is well written and organized. The paper follows a well-defined structure, with a logical flow of ideas throughout. Weaknesses: The experiment part could benefit from more substantial content. Comparison with previous methods should be addressed in the experiment part. See below for the two questions as well. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is the advantage of preserving the invariant features in chaotic dynamics? Does it really contribute to long-term forecasting, which is supposed to be the objective? If it does, I expect the authors to show an improvement in long-term forecasting quantitatively by adopting one of the two proposed approaches. The proposed method is evaluated on the synthetic data, the Lorenz-96 and the Kuramoto–Sivashinsky chaotic dynamics. I am wondering how the model performs on some real world data, e.g., the climate system. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback! **W1**: > The experiment part could benefit from more substantial content. Comparison with previous methods should be addressed in the experiment part. See below for the two questions as well. Prior methods for training emulators generally use the standard MSE loss, which we show as the baseline in our results. We also include a comparison with a previous approach based on the Sobolev norm (Table 2). Our approaches outperform the Sobolev norm method in our noisy chaotic settings. **Q1.1**: > What is the advantage of preserving the invariant features in chaotic dynamics? Preserving invariant features of chaotic dynamics means matching the long-term statistical behavior of a chaotic dynamical system. For example, weather models may only be able to predict the exact weather up to two weeks ahead of time. In fact, it is impossible to predict the precise state of the Earth on the time scales (years) required for climate modeling [1], but we would still like to predict important statistical features of the climate, e.g. the average number of hurricanes per year. It is these kinds of statistical features of the chaotic dynamics that we want to preserve for long-term forecasting. **Q1.2**: > Does it really contribute to long-term forecasting, which is supposed to be the objective? Yes! Any long-term prediction for the exact state of the system diverges exponentially from the ground truth due to chaos, so the important quantities to consider for long-term forecasting are precisely the statistical features of the dynamics. By matching these statistical features, we are able to train an emulator that truly captures the chaotic dynamics and correctly models the only predictable aspects of the chaotic system. Another important point here is that short-term prediction performance does not necessarily imply long-term performance in terms of statistics and model stability. 
An accurate weather model run for a longer time will not necessarily produce high-quality climate statistics, just as an emulator trained only on short-term MSE may fail to capture the true long-term behavior of the system. The goal of our work is to help rectify this problem by proposing methods that directly tackle long-term forecasting for chaotic systems. **Q1.3**: > If it does, I expect the author to show an improvement in long-term forecasting quantitatively by adopting one of the two proposed approaches. Our results *do* show a significant improvement (which is often even visually evident in the sample predictions) over the standard MSE-based training method on the relevant long-term statistical behavior. In particular, our metrics include the distributions of key summary statistics and the energy spectrum, both of which represent long-term statistical characterizations of the chaotic system. Our trained deterministic emulators provide much higher quality instances of the system dynamics, which sample the chaotic attractor, and this is the best you can hope for when performing long-term forecasts of chaotic dynamics. The new experiments (see general rebuttal) provide additional evidence for this using new metrics. The final paper will better emphasize these points. **Q2**: > The proposed method is evaluated on the synthetic data, the Lorenz-96 and the Kuramoto–Sivashinsky chaotic dynamics. I am wondering how the model performs on some real world data, e.g., the climate system. We are also excited to see how our approaches would scale for climate modeling. However, this is outside the scope of this work. Existing climate emulators, such as FourCastNet [2] and ClimaX [3], are large models that cost millions to train, tune, and validate. [1] What might we learn from climate forecasts? Smith, Leonard A. (2002). [2] FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators. Pathak, Jaideep et al. (2022). 
[3] ClimaX: A foundation model for weather and climate. Nguyen, Tung et al. (2023). --- Rebuttal Comment 1.1: Title: Thank you for answering my questions Comment: Re Q1.1: I think there is still a large gap between 'the average number of hurricanes per year' and predicting statistical features of the chaotic dynamics... To me, this model still cannot tell me the exact long-term prediction of chaotic dynamics. Some of the statistical features can be visualized by, for example, PCA or t-SNE based on historical data. Why do we need to use the model to tell us these low-dimensional invariant features? Re Q1.2: Can the authors give an example of the invariant feature of a REAL chaotic system? The Lorenz-96 is a synthetic system with a known low-dimensional feature. If the invariant feature of a real chaotic system is too complex, or too hard to define, how can one verify that the model can be applied to a real system? Generally I kind of agree with Reviewer GF1X that '...to see how the methods perform on at least one empirical chaotic problem would have also been nice...' --- Reply to Comment 1.1.1: Title: Response to your questions (1/2) Comment: Dear reviewer, we greatly appreciate your prompt feedback. Please allow us to address the concerns you've highlighted. > Re Q1.1: I think there is still a large gap between 'the average number of hurricanes per year' and predicting statistical features of the chaotic dynamics… The “average number of hurricanes” is an example of a statistical feature of climate models. Of course, there are many other important statistical features that will be relevant for building a high-quality climate emulator. Our goal is to use these statistical features to help train better deterministic emulators that will reproduce the true underlying chaotic attractor. Moreover, our CL approach is precisely focused on automatically identifying a rich collection of informative statistical features without requiring prior knowledge. 
Our results show that training an emulator using the invariant statistics learned from CL significantly improves the ability of the emulator to preserve many physics-informed statistical metrics we use for evaluation. > To me, this model still cannot tell me the exact long-term prediction of chaotic dynamics. No model can even *in theory* tell us the exact long-term prediction of chaotic dynamics. This is because chaotic systems exhibit extreme sensitivity to initial conditions, i.e., any small change or noise in the initial condition results in an exponentially divergent solution, even if we use a mathematically exact model. That is why we focus on statistical features that characterize chaotic dynamics. During the evaluation, we use statistics-based metrics that characterize the chaotic attractor of our simulated systems and show that these indeed match the data well when performing long-term predictions: 1,000 time steps in the Lorenz 96 system and 500 time steps in the Kuramoto–Sivashinsky system. Moreover, many studies in climate science have shown that preserving the statistical features of chaotic systems is very important in predicting and understanding the climate. Fundamental studies ([1, 2]) show that the dimension of attractors characterizes the minimum number of variables to describe the system. [3, 4] also demonstrate that preserving the invariant attractors is helpful for determining extreme events. A recent study ([4]) also identified rMSE as a problematic evaluation metric for climate prediction as it implicitly assumes two time series should be aligned for every individual day, which is nearly impossible for real data. Instead, they use an invariant summary statistic for evaluating climate models. > Some of the statistical features can be visualized by, for example, PCA or t-SNE based on historical data. Why do we need to use the model to tell us these low-dimensional invariant features? 
Yes, PCA, or t-SNE can be used to visualize the features. However, these features do not come for free, and our goal is not visualization. While PCA or t-SNE can visualize known features, these methods do not allow us to discover new invariant statistical features. We also do not specifically target low-dimensional or visualizable features. In fact, our CL approach allows us to learn general high-dimensional invariant statistics without prior knowledge, which can then be used for training emulators.
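The sensitivity to initial conditions described in this thread can be demonstrated with a toy system; this sketch uses the logistic map $x \mapsto 4x(1-x)$, not anything from the paper, to show a $10^{-12}$ perturbation growing to order one.

```python
# Two logistic-map trajectories started 1e-12 apart. The gap grows
# roughly like 2^t (the map's Lyapunov exponent is ln 2) until it
# saturates at the size of the attractor.
x, y = 0.3, 0.3 + 1e-12
gaps = []
for _ in range(200):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    gaps.append(abs(x - y))
```

After a few dozen steps the two trajectories are effectively unrelated, which is why the rebuttal argues that only statistical features of the attractor remain predictable.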
Summary: The paper uses neural operators to track the invariant statistics of chaotic systems. It proposes the novel use of an optimal transport loss and contrastive learning to match distributions, so that the learned model correctly tracks the attractor underlying the chaotic behavior. The paper uses FNO as a backbone to study the Lorenz and KS equations. Experiments show the proposed methods better track behaviors such as the histogram and spectrum. Strengths: The paper proposes to learn the dynamics of chaotic systems with an optimal transport loss and contrastive learning. The proposed methods are more stable compared to the standard MSE/L2 loss, and they can capture the attractor of the system. Experiments show these methods better track statistical behaviors such as the histogram and spectrum. Weaknesses: The paper only studied the Lorenz and 1d Kuramoto–Sivashinsky equations. It will be very interesting to study how these methods scale to the 2d Navier-Stokes (Kolmogorov Flow) problem. It will be interesting to discuss time complexity. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I have a few questions: - Which method (the Sinkhorn algorithm or contrastive learning) does better? What are their comparative advantages and disadvantages? - How do the training times of the Sinkhorn algorithm and contrastive learning compare? I assume they will be slower than standard supervised learning. It will be great to report the runtime and training time. Would the Wasserstein loss be easier to optimize compared to the L2 loss? It will be interesting to show a training curve. - How do the proposed methods compare to generative models [1, 2]? For example, in Generative Adversarial Neural Operators they also match the Wasserstein distance. [1] Rahman, Md Ashiqur, et al. "Generative adversarial neural operators." arXiv preprint arXiv:2205.03017 (2022). [2] Lim, Jae Hyun, et al. "Score-based diffusion models in function space." arXiv preprint arXiv:2302.07400 (2023). 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper addressed the limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback! **W1**: > It will be very interesting to… scale to 2d Navier-Stokes… We agree that scaling to 2D Navier–Stokes and even higher-dimensional chaotic problems would be an interesting extension to this work. We note, however, that our current results on 1D spatiotemporal chaos already provide high-dimensional chaotic attractors that allow us to validate our approaches and show the benefits of using chosen or learned statistics to train modern deep learning-based emulators. There is no fundamental impediment to extending our method to this setting; it simply requires using a very different backbone in our architecture and more training time than the rebuttal period allows. **W2**: > It will be interesting to discuss time complexity. In the rebuttal PDF Figure R1, we document the training time of the models. The OT approach relies on the Sinkhorn algorithm, which scales as $O(n^2\log(n))$ for comparing two distributions of $n$ points each (Theorem 2 in [3]). In our experiments, we use $n = 6000$ to $n = 25,600$ points with no issues, so this approach scales relatively well. The CL approach requires pretraining but is even faster during emulator training since it uses a fixed, pre-trained embedding network. We will gladly include a more detailed discussion of time complexity in our final paper. **Q1**: > Which method (the Sinkhorn algorithm or contrastive learning) does better? What are their comparative advantages and disadvantages? Please see the general rebuttal for a detailed discussion of this topic. We will make sure to include this discussion in our final paper. **Q2**: > How do the training times of the Sinkhorn algorithm and contrastive learning compare? I assume they will be slower than standard supervised learning. It will be great to report the runtime and training time. We show training times in the rebuttal PDF Figure R1. 
CL is faster than Sinkhorn (OT) during emulator training because CL uses a pre-trained embedding function. However, CL does require a pre-training step. > Would the Wasserstein loss be easier to optimize compared to the L2 loss? It will be interesting to show a training curve. The Sinkhorn loss is a computationally efficient proxy for the Wasserstein distance. While the Sinkhorn loss is still slower to compute than the L2 loss (rMSE), it appears to converge faster. We include sample training curves in the rebuttal PDF showing that the Sinkhorn loss converges in fewer iterations than rMSE. **Q3**: > How do the proposed methods compare to generative models [1, 2]? For example, in Generative Adversarial Neural Operators they also match the Wasserstein distance. Both cited works are generative models, as you point out. However, our problem is not a generative modeling problem. We are not trying to directly sample from the attractor distribution but instead find and use high-quality attractor statistics that allow us to train a better emulator for the dynamical system. As the emulator evolves over time, it may be effectively sampling from the attractor distribution, but the sampling itself is not the end goal. Replacing the emulator with a general-purpose generative model does not accomplish our goal of training a model that accurately captures the time dynamics of the system. To address the use of the Wasserstein distance in [1], the generative adversarial neural operator (GANO) uses the Wasserstein distance in the same way as the original Wasserstein GAN does—as a motivation for the adversarial framework for generative modeling. As noted in [1], “while the cost functional… is well defined, showing that the learned measure is indeed an approximation of [the true distribution]... remains an open problem. 
We address this issue empirically and perform a set of experiments that demonstrate that GANO produces diverse outputs from the data probability measure.” In other words, it is not clear whether GANs really do compute a Wasserstein approximation and we should really treat the connection to OT as general motivation. In our work, we use the Sinkhorn algorithm to solve an entropy-regularized OT problem, which is a very well-understood approximation theoretically, and then use the Sinkhorn-approximated Wasserstein distance to train an emulator for a dynamical system rather than perform generative modeling. [1] Rahman, Md Ashiqur, et al. "Generative adversarial neural operators." (2022). [2] Lim, Jae Hyun, et al. "Score-based diffusion models in function space." arXiv preprint (2023). [3] Dvurechensky, Pavel, et al. “Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn’s Algorithm.” (2018).
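The Sinkhorn iterations referenced above admit a compact implementation; this is an illustrative sketch of the entropy-regularized OT cost between two uniform point clouds, not the paper's loss, and it omits the log-domain stabilization and convergence checks a practical version needs (the regularization strength and demo data are assumptions).

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.1, n_iters=200):
    """Entropy-regularized OT cost between two equal-weight point
    clouds (rows are points), via plain Sinkhorn iterations."""
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
    K = np.exp(-C / eps)                                # Gibbs kernel
    a = np.ones(len(x)) / len(x)                        # uniform source marginal
    b = np.ones(len(y)) / len(y)                        # uniform target marginal
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)       # enforce row marginals
        v = b / (K.T @ u)     # enforce column marginals
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return (P * C).sum()

rng = np.random.default_rng(0)
pts = rng.normal(0, 1, (50, 2))
near = sinkhorn_cost(pts, pts + 0.1)  # slightly shifted cloud
far = sinkhorn_cost(pts, pts + 1.0)   # strongly shifted cloud
```

Differentiating such a cost with respect to the model outputs (e.g. via automatic differentiation) is what lets it serve as a training loss for matching distributions of summary statistics.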
Rebuttal 1: Rebuttal: We would like to first thank the reviewers and ACs for their helpful comments and questions. We are happy to see that the reviewers generally appreciate the importance of the problem of training better emulators for chaotic dynamics and find the proposed methods and ideas well-motivated. In our response, we provide new experiments, evaluation metrics, and additional discussion to help clarify the properties of our two proposed approaches. # Comparison of optimal transport and contrastive learning approaches Several reviewers asked for a more detailed comparison between the two proposed approaches: physics-informed optimal transport (OT) and contrastive learning (CL). **Ultimately, both methods have strong performance according to a variety of metrics, but the CL approach requires no prior physical knowledge and is faster. Both approaches have significant advantages over the classical approach of minimizing rMSE, which fails to preserve important statistical characteristics of the system.** We present both approaches here because they are both methods for encouraging the emulator to capture the long-term statistical behavior of the chaotic system. In fact, both approaches work by matching statistics computed from the emulator to statistics from the data. The primary conceptual difference between them is that the optimal transport approach takes advantage of prior domain knowledge about the setting to choose informative summary statistics, while the contrastive learning approach learns informative invariant statistics from the data alone. ## Physics-informed OT ### Advantages: * If prior domain knowledge is available, OT can make good use of it and often performs better than CL when given highly informative summary statistics. * OT does not require pre-training to learn a distance measure. ### Disadvantages: * Without domain knowledge, an arbitrary uninformative summary statistic will often have no performance benefit. 
* Because OT requires estimating the Wasserstein distance ($O(n^2\log(n))$ operations) for each training step, it is slower than the CL approach during emulator training. However, the scaling behavior is still reasonable, and the computation can be made even faster via sub-sampling. ## Unsupervised CL ### Advantages: * CL does not require any prior domain knowledge and instead learns informative statistics directly from the data. * CL is faster than OT during emulator training since it uses a pre-trained encoder for invariant statistics rather than computing distributional distances. ### Disadvantages: * As an unsupervised representation learning method, CL requires a pre-training step to learn the informative statistics, which are later used for emulator training. *However, this fixed one-time cost is easily amortized.* * In principle, CL relies on the data diversity of the multi-environment setting, although our new experiments show that we can still obtain good results even with a very minimal diversity of environments. *Also, the multi-environment setting is common in many scientific and engineering applications due to environmental noise or tunable control parameters, so this data diversity is often “free”.* We have an existing discussion comparing the two approaches in the submitted supplement, and we will include a more detailed discussion in the final paper. # New experiments and evaluation metrics We have performed several new experiments that act as additional points of comparison and help us better understand the behavior of our methods under a variety of conditions: 1. For our OT approach (which uses the Sinkhorn loss), we test a reduced set of summary statistics, which shows how the quality of the summary statistic affects the performance of the method (Table R2). With an informative summary statistic, we find even a reduced set can still be helpful but, for a non-informative statistic, the OT method fails as expected. 2. 
For our CL approach, we test a multi-environment setting with reduced data diversity and find that the contrastive method still performs well under the reduced conditions (Table R3), which demonstrates robustness. 3. We also implement a variant of our OT approach that uses Maximum Mean Discrepancy (MMD) as a distributional distance rather than the Wasserstein distance. Using the same set of summary statistics, we find that MMD does not perform as well as the Wasserstein distance for training emulators (Table R4). As suggested by the reviewers, we also add new evaluation metrics (Table R1) in the form of: 1. the leading Lyapunov exponent (LLE)—a dynamical quantity that measures how quickly the chaotic system becomes unpredictable; and 2. the fractal dimension—a characterization of the dimension of the attractor. For the LLE, we report the relative absolute error $|\hat\lambda - \lambda|/|\lambda|$ between the model and the ground truth averaged over the test set. For the fractal dimension, we report the absolute error $|\hat D - D|$ between the estimated fractal dimension of the model and the ground truth averaged over the test set. We generally find in high-noise settings that both of our approaches give lower LLE errors than an emulator trained on rMSE loss alone, and the contrastive approach has the lowest LLE error. This is despite the fact that our method is only encouraged to match invariant statistics of the final attractor rather than dynamical quantities. We also see evidence that the fractal dimension of the CL approach is closer to the true attractor. However, we note that the fractal dimension is difficult to reliably estimate for high-dimensional chaotic attractors [1]. The new experiments provide useful insights for future applications of our approaches, and the new evaluation metrics shed additional light on the features of chaos preserved by our trained emulators. 
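As background for the Sinkhorn-based OT loss discussed above, the entropy-regularized Wasserstein distance between two empirical samples of summary statistics can be sketched in a few lines. This is a generic illustration with our own function and variable names (not the paper's implementation), using a squared-Euclidean cost, uniform marginals, and a fixed iteration count, and omitting the log-domain stabilization a production version would need for small regularization:

```python
import numpy as np

def sinkhorn_distance(x, y, eps=1.0, n_iters=200):
    """Entropy-regularized OT cost between two samples of summary
    statistics (rows of x and y), via alternating Sinkhorn scalings."""
    n, m = len(x), len(y)
    # squared-Euclidean cost matrix between all pairs of statistics
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                              # Gibbs kernel
    a = np.full(n, 1.0 / n)                           # uniform source marginal
    b = np.full(m, 1.0 / m)                           # uniform target marginal
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                          # marginal-matching updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                   # transport plan
    return (P * C).sum()                              # approx. transport cost

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 3))
y = rng.normal(size=(64, 3))
d_same = sinkhorn_distance(x, y)        # two samples of the same distribution
d_far = sinkhorn_distance(x, y + 3.0)   # one sample shifted away
```

Each Sinkhorn update costs $O(nm)$, which is where the per-training-step expense mentioned above comes from; sub-sampling the rows of `x` and `y` reduces it directly.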
[1] Greenside, H. S., et al. "Impracticality of a box-counting algorithm for calculating the dimensionality of strange attractors." Phys. Rev. A (1982).
Summary: This work considers the problem of learning invariant statistics of chaotic dynamical systems. It is motivated by the observation that training neural operators by standard MSE loss focuses on short-term predictability and thus may miss an attractor’s properties. Two ways of augmenting the MSE loss for capturing invariant stats are discussed: one based on the Wasserstein distance between probability distributions of explicitly provided stats (physics-informed), and one based on contrastive feature learning. Experiments on chaotic benchmarks, the Lorenz-96 and the Kuramoto-Sivashinsky equations, are shown. Strengths: The paper is timely and addresses an important issue, namely how we can capture crucial properties of an observed dynamical system beyond just short-term predictions. In general I like the approach, and think this could be a very fruitful addition to the field. In general, the paper is also well written, and the ideas well motivated. Although incorporating invariant stats into the loss has recently been discussed, this paper offers a different perspective on this topic. Weaknesses: On the other hand, the experimental evaluations are, in my view, quite weak, and also a lot of related and relevant literature is not discussed. The experiments primarily show that each method wins on exactly the evaluation measure that was included or accentuated in its loss function: if the loss is only MSE, the MSE-only-trained method wins on short-term predictions; if the loss includes invariant statistics, that method wins on exactly those invariant stats. This is not surprising, but would be expected for almost anything explicitly included in the loss term (i.e., the method for which a specific property was included in the loss should have an edge when evaluated on that particular property). 
It would have been much more convincing if methods which include invariant statistics could also reproduce many *other* properties of the attractor that were *not* explicitly included in the loss, like its Lyapunov spectrum or fractal dimension. Besides, two of the columns in Table 1 \& 2 don’t contain any indication of statistical error and are therefore somewhat meaningless in my mind (are the differences significant or negligible?). To see how the methods perform on at least one empirical chaotic problem would have also been nice. Literature-wise, the idea of including invariant statistics in the loss has been considered by a number of recent publications, for instance https://arxiv.org/pdf/2304.12865.pdf or https://pubs.aip.org/aip/cha/article-abstract/33/6/063152/2900453/Learning-dynamics-on-invariant-measures-using-PDE?redirectedFrom=fulltext. The fact that purely MSE-based methods have difficulty capturing invariant stats for chaotic systems has recently also been extensively discussed in https://proceedings.neurips.cc/paper_files/paper/2022/hash/495e55f361708bedbab5d81f92048dcd-Abstract-Conference.html and, relatedly, https://arxiv.org/abs/2306.04406, although these authors arrive at different conclusions of how to tackle the problem. My perception is that this problem has received a lot more attention recently than the authors’ related works section makes one believe. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: My primary concern is really the weaknesses in the experimental evaluations as listed above. This sect. was a bit disappointing to read after the exciting start of the paper and presentation of methods. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 4 excellent Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback!

**W1**: > The experiments primarily show that on any of the evaluation measures that particular method wins…

Our proposed training methods yield emulators with much more consistent long-term behavior that accurately capture the dynamics of the chaotic system. We make significant gains in long-term behavior, as measured by the evaluation statistics, while retaining very similar short-term prediction performance. Both methods (OT and CL) achieve these benefits, with the contrastive method requiring no prior knowledge and being faster. For the CL method, we do not provide any explicit statistics during training. Instead, the contrastive pre-training automatically learns useful invariant statistics. For the OT approach, we show that training using a chosen set of informative summary statistics improves performance on those statistics as well as other statistics (e.g. the energy spectrum, Lyapunov exponent) not used during training.

**W2**: > It would have been much more convincing if methods… reproduce… Lyapunov spectrum or fractal dimension.

We have included estimates of the leading Lyapunov exponent and the fractal dimension in our updated results (Table R1). Note that our method does not aim to regularize dynamical features, such as the Lyapunov exponents, which are distinct from time-invariant statistics. Despite this, we see significant improvements to the model’s leading Lyapunov exponent in high-noise settings. Since our data consists of high-dimensional chaotic attractors, it is very difficult to obtain reliable estimates of the full Lyapunov spectrum and the fractal dimension [1]. However, we do see evidence that the estimated fractal dimension of our CL approach is closer to the ground truth than the baseline’s.

**W3**: > Besides, two of the columns in Table 1 & 2 don’t contain any indication of statistical error…

Thank you for pointing this out. We have now included quantile intervals in our updated tables. 
Our results show significantly better performance on long-term statistics while retaining similar short-term prediction performance. **W4**: > To see how the methods perform on at least one empirical chaotic problem would have also been nice. Our work, like almost all others in this area, uses simulated data for validation. The datasets we currently use are representative of high-dimensional spatiotemporal chaos. It is quite difficult to find publicly available empirical datasets of fully-observed chaotic dynamical systems, especially ones showing spatiotemporal chaos. This is why most papers on spatiotemporal emulators are evaluated on simulated data. One publicly available dataset is global weather data, which we are excited to try but it would require significant resources. **W5**: > Literature-wise, the idea of including invariant statistics in the loss has been considered by a number of recent publications… We first note that 3 of the 4 papers ([1], [2], and [4]) should be considered contemporaneous work under the NeurIPS guidelines. That said, we would be happy to cite and add a discussion of all four papers to better clarify our work in light of very recent developments in the field. Two of the suggested papers ([3], [4]) focus on training traditional RNNs to emulate chaotic dynamics. Their methods and the ones presented in our submission have similar training protocols for the rMSE loss. Key differences between these papers and our approaches are (a) they only work in the noise-free setting whereas we assume noisy observations, (b) they only use a short-term rMSE loss, and (c) they evaluate their method in the easier single-environment setting. In our experiments, we see that rMSE is a fine loss function in noise-free settings (consistent with the conclusions of [3],[4]), but this loss fails to preserve chaotic attractors in noisy settings. 
[1] focuses on reservoir computing—an RNN-like architecture with a single trained output layer—and trains using known dynamical invariants, e.g. Lyapunov exponents. Training using the Lyapunov exponents (a) requires significant prior physical knowledge or (b) empirical estimates of the Lyapunov exponents, which are difficult to estimate stably from data, particularly in the presence of noise and high dimensions. Our updated evaluation results (Table R1) show that our training methods result in accurate estimates of the Lyapunov exponent even when it’s not used during training. The method for modeling chaotic systems suggested in [2] requires solving a PDE for the probability density of the state and differs significantly from our proposed approaches, which are based on chosen or learned statistics. In particular, [2] scales poorly to higher dimensional state spaces, like the spatiotemporal dynamics that we consider, because even explicitly representing a high-dimensional probability distribution—let alone solving a high-dimensional PDE using a mesh—suffers from the curse of dimensionality. This is likely why the experiments in [2] focus on low-dimensional (up to 3) dynamical systems since solving a 3D PDE (i.e., a PDE with three “channels” like Lorenz-63) can already be numerically challenging. In contrast, our experiments are on high-dimensional spatiotemporal systems with state space dimensions ranging from 60 to 256. [1] Constraining Chaos. Platt et al, April 2023. [2] Learning dynamics on invariant measures using PDE-constrained optimization. Botvinick-Greenhouse et al, Chaos, June 2023. [3] On the difficulty of learning chaotic dynamics with RNNs. Mikhaeil et al, NeurIPS 2022. [4] Generalized Teacher Forcing for Learning Chaotic Dynamics. Hess et al, June 2023. **Q1**: > My primary concern is really the weaknesses in the experimental evaluations as listed above. Please see the global rebuttal. 
We hope that our additional evaluation metrics and the above discussion address your concerns! --- Rebuttal Comment 1.1: Title: remaining points Comment: I thank the authors for their response which clarified some of my points. A few remaining remarks: - W1: This doesn't really address my point, I think. My argument was that it's quite expected that if you optimize method 1 for property A, then method 1 will outperform others not optimized for A. The additional evaluations on LE and FD mostly address this point, however. - W2: Since this is all model-based, you have the Jacobians, so why can't you assess the whole LE spectrum? - W4 \& W5: The authors state as a key difference to [4,5] that these do not involve settings with noise. Didn't check this, but what I do recall is that [4,5] used several *empirical* datasets which I would assume contain a lot of noise! So this falsifies the authors' statements about both the noise and the unavailability of empirical datasets with characteristics of chaos. --- Reply to Comment 1.1.1: Title: Response to your questions (1/2) Comment: Dear reviewer, we greatly appreciate your valuable feedback! Please allow us to address your comments as follows. > W1: This doesn't really address my point, I think. My argument was that it's quite expected that if you optimize method 1 for property A, then method 1 will outperform others not optimized for A. The additional evaluations on LE and FD mostly address this point, however. We agree with this point. To some extent, the OT method can be thought of as an “oracle” approach that lets us measure a best-case level of performance that we would aim for with a method that cannot be optimized for property A. This is why we chose to include performance on property A among our results. But the results are certainly strengthened with your suggestions of LE and FD. 
Our OT method also performs significantly better than the baseline when evaluated on the Fourier energy spectrum and the leading Lyapunov exponent in the high-noise setting. In addition, new experiments with our OT method (also mentioned in our discussion with Reviewer DhyB) demonstrate that OT can also improve performance when using limited knowledge of the invariant statistics, which suggests better generalization of the OT approach. Last, we want to emphasize that our CL approach, which is not designed around any of the evaluation metrics (and uses no physics-informed prior knowledge), delivers performance superior to the rMSE baseline on all four metrics we evaluated.

> W2: Since this is all model-based, you have the Jacobians, so why can't you assess the whole LE spectrum?

Thank you for your suggestion! We initially focused on obtaining LLEs since this is a much more straightforward calculation, but we have now obtained estimates for the full Lyapunov spectrum, as requested. We present the results regarding the Lyapunov spectrum in the table below. For the Lyapunov Spectrum Error, we report the sum of relative absolute errors across the full spectrum: $\sum_{i=1}^d |\hat{\lambda}_i - \lambda_i| / |\lambda_i|$, where $d$ is the dimension of the dynamical state (i.e., 60 for Lorenz-96). As suggested by [5], we also compare the number of positive Lyapunov exponents (LEs) as an additional statistic to measure the complexity of the chaotic dynamics. We compute the absolute error in the number of positive LEs, $|\sum_{i=1}^d \mathbf{1}(\hat{\lambda}_i > 0) - \sum_{i=1}^d \mathbf{1}(\lambda_i > 0)|$, averaged over the test instances. 
| Noise $r$ | Training | Leading LE Error $\downarrow$ | Lyapunov Spectrum Error $\downarrow$ | Number of Positive LEs Error $\downarrow$ |
|-------------|-------------|-------------|-------------|-------------|
|0.1 |$\ell_{\rm rMSE}$ | **0.014** (0.006, 0.021) | 0.388 (0.110, 0.309) | 0.526 (0.000, 1.000) |
|0.1 | $\ell_{\rm sinkhorn} + \ell_{\rm rMSE} $| 0.049 (0.040, 0.059) | **0.256** (0.168, 0.285) | 0.375 (0.000, 1.000) |
|0.1 | $\ell_{\rm feature} + \ell_{\rm rMSE} $| 0.065 (0.058, 0.073) |0.285 (0.164, 0.289) | **0.365** (0.000, 1.000) |
||||||
|0.2 | $\ell_{\rm rMSE}$ | 0.175 (0.156, 0.191) | 1.940 (0.522, 0.726) | 4.248 (4.000, 5.000) |
|0.2 |$\ell_{\rm sinkhorn} + \ell_{\rm rMSE} $| 0.019 (0.006, 0.030) | 0.837 (0.122, 0.590) | **2.540** (2.000, 3.000) |
|0.2 |$\ell_{\rm feature} + \ell_{\rm rMSE} $| **0.012** (0.006, 0.018) | **0.769** (0.138, 0.568) | 2.819 (2.000, 3.000) |
||||||
|0.3 |$\ell_{\rm rMSE}$ | 0.446 (0.425, 0.463) | 1.979 (0.702, 0.939) | 7.230 (7.000, 8.000) |
|0.3 |$\ell_{\rm sinkhorn} + \ell_{\rm rMSE} $| 0.101 (0.062, 0.134) | **1.186** (0.571, 0.745) | **4.824** (4.000, 6.000) |
|0.3 |$\ell_{\rm feature} + \ell_{\rm rMSE} $| **0.067** (0.045, 0.091) | 1.290 (0.558, 0.780) | 5.603 (5.000, 6.000) |

Table 1. **Performance with varying noise level on Lorenz-96 evaluated on the Lyapunov spectrum.** The average (25th, 75th percentile) error rates over 200 testing instances of training the neural operator with (1) only rMSE; (2) Sinkhorn (OT) and rMSE (using prior knowledge to choose summary statistics); and (3) contrastive feature loss (CL) and rMSE (no prior knowledge used). In the presence of high noise, OT and CL give lower relative errors on the leading Lyapunov exponent (LLE). When evaluating the full Lyapunov spectrum, OT and CL show significant advantages over the baseline. 
In addition, the lower absolute errors in the total number of positive Lyapunov exponents (LEs) suggest that OT and CL are able to match the complexity of the true chaotic dynamics.
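The three spectrum-level error metrics reported in Table 1 follow directly from the formulas above; here is a minimal sketch with our own helper and variable names (assuming both spectra are arrays in matching order with nonzero reference exponents):

```python
import numpy as np

def lyapunov_errors(est, true):
    """Error metrics between an estimated and a reference Lyapunov spectrum:
    leading-LE relative error, summed relative spectrum error, and the
    absolute error in the count of positive exponents."""
    est, true = np.asarray(est, float), np.asarray(true, float)
    leading = abs(est[0] - true[0]) / abs(true[0])
    spectrum = np.sum(np.abs(est - true) / np.abs(true))
    n_pos = abs(int((est > 0).sum()) - int((true > 0).sum()))
    return leading, spectrum, n_pos

# toy spectra: the estimate turns one negative exponent slightly positive
true = np.array([1.2, 0.5, -0.1, -0.8])
est = np.array([1.0, 0.6, 0.1, -0.9])
lead, spec, npos = lyapunov_errors(est, true)
```

In the paper's setting these per-instance errors would then be averaged over the 200 test instances.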
TMT-VIS: Taxonomy-aware Multi-dataset Joint Training for Video Instance Segmentation
Accept (poster)
Summary: The paper proposes TMT-VIS, combining multiple VIS datasets for a unified VIS model. Specifically, TMT-VIS introduces taxonomy embeddings as prompts to make the model aware of the taxonomy of different datasets. Experiments on YTVIS-2019, YTVIS-2021, OVIS, and UVO demonstrate the effectiveness of dataset unification. Strengths: + The proposed method is straightforward and easy to follow. The performance under joint training is good. + Unifying multiple datasets into a single model is of great value. Weaknesses: - The authors claim that "TMT-VIS is the first DETR-style framework that can jointly train multiple video instance segmentation datasets with such a huge improvement." However, previous work UNINEXT [1] also trains on multiple VIS datasets and achieves a performance gain with a DETR-style framework, which is missing from the related work. - Tab.3 is confusing. The authors are recommended to evaluate all three datasets in this table to show the performance gain under joint training. Furthermore, the authors claim that Mask2Former-VIS meets difficulties in dealing with multiple datasets, decreasing performance under joint training. Is this indicated by ID-I and ID-V in Tab.3? It is worth noticing that TMT-VIS jointly trained on YTVIS and UVO also decreases compared to the model trained on a single dataset. In other words, the proposed TMT-VIS does not overcome the performance drop under joint training. It is ambiguous whether the performance gain is caused by the proposed method boosting the performance on a single dataset or alleviating the conflict in multi-dataset training. - The ablation on the size of the taxonomic embedding set is not sufficient. For example, what's the performance using all embeddings? Does it have negative effects when the topk taxonomic embeddings miss the gt classes? - Please provide more details of the taxonomy-aware matching loss. 
[1] Universal Instance Perception as Object Discovery and Retrieval Technical Quality: 3 good Clarity: 3 good Questions for Authors: please see my listed questions in the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed their work's limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we want to thank you for the detailed, insightful, and constructive comments.

### UNINEXT

Thanks for pointing out our incorrect claim. We acknowledge that UNINEXT is the first DETR-based method which jointly trains multiple VIS datasets. It simply utilizes a BERT language encoder to generate language embeddings of the categories from all video datasets, and fuses this information with visual embeddings through a simple bi-directional cross-attention module. However, we argue that our method is the current SOTA for jointly training multiple VIS datasets. Our method, on the other hand, adjusts the language embeddings via a taxonomy compilation module, which consists of a spatio-temporal adapter and an FFN network, as well as a taxonomy injection module, which fuses the taxonomy information into video queries with multi-level cross-attention and self-attention layers. Moreover, UNINEXT achieves a lower performance of 64.3 AP (ours is 64.9 AP) on YouTube-VIS 2019 despite using more training data.

Table 5-1. Comparison with UNINEXT on YTVIS-19.

| Method | Datasets Number | Training time | Backbone | AP |
| ------------- | :-------------: | -------------- | ---------- | ---- |
| UNINEXT | 8 | 3 days | ConvNext-L | 64.3 |
| TMT-VIS(Ours) | 4 | 1 day 12 hours | Swin-L | 64.9 |

### Ablation study on the size of taxonomic embedding

Thanks for your careful suggestions about the size of the taxonomic embedding set. We conducted more ablation studies on this with the ResNet-50 backbone, evaluated on the YTVIS-19 dataset. As the results in Table 5-2 show, the size of the taxonomic embedding set is crucial to the performance. When the size is set to 1 or 2, which is clearly less than the number of categories in the input video, the topk taxonomic embeddings miss the gt classes. 
Compiling such taxonomic information and injecting it into the video queries provides no guidance for convergence; instead, it diverts the queries' attention to irrelevant categories, resulting in no improvement or even degradation in the final results. When using all embeddings, the improvement is trivial, since the TCM cannot filter out irrelevant classes and the TIM module simply injects the information of the whole category space into the queries.

Table 5-2. Ablation study on sizes of the taxonomic embedding set in TCM.

| Size | AP | Size | AP |
| ---- | :--: | ---- | :--: |
| 1 | 47.1 | 15 | 49.2 |
| 2 | 47.5 | 20 | 48.5 |
| 5 | 49.3 | 50 | 47.4 |
| 10 | 49.7 | 100 | 47.2 |

### The reorganization of Tab.3

Thanks for your advice on our table. We have updated these experiments and reorganized Tab. 3; the results are shown in the rebuttal pdf. We present Tab. 3 to demonstrate that, due to the heterogeneity of the category spaces of multiple VIS datasets, simply training on multiple datasets with an aggregated category space will not improve the performance significantly. The performance may even drop due to the huge imbalance of data; ID-I and ID-V in Tab. 3 exemplify this phenomenon.

### Performance degradation when jointly training UVO and YTVIS

YTVIS-19 builds on an existing dataset called YouTube-VOS. OVIS is collected specifically for video instance segmentation in occluded scenes. UVO, on the other hand, adopts videos from Kinetics-400, which are human-centric and contain diverse sets of human actions and human-object interactions; UVO is also densely annotated with many more instances. The key statistics of these datasets are shown in the rebuttal pdf, which corresponds to Tab. 1 of the supplementary file. As illustrated in that table, the imbalance between these VIS datasets is significant. 
When jointly training YTVIS-19 with UVO, the model tends to converge to fit UVO, which degrades validation performance on YTVIS-19. When jointly training YTVIS-19 and UVO with OVIS, the imbalance is alleviated, resulting in a notable performance improvement. It is worth noting that, due to the scale of VIS datasets, especially YTVIS-19, the final results fluctuate by approximately 0.3 AP; this may also be the direct reason for the apparent drop of our TMT-VIS compared with Mask2Former-VIS, since in other training settings TMT-VIS significantly outperforms Mask2Former-VIS.

### More details of the taxonomy-aware matching loss

The formula of the taxonomy-aware matching loss is as follows:
$$ \mathcal{L} = \sum\mathcal{L}^{\text{ce}}(\mathbf{P^c}, \mathbf{G^c}) + \sum\mathcal{L}_{inj}^{\text{ce}}(\mathbf{P^{c^{\prime}}},\mathbf{G^{c^{\prime}}}) + \sum \mathcal{L}^{\text{dice}}(\mathbf{P^m}, \mathbf{G^m}) + \sum \mathcal{L}^{\text{bce}}(\mathbf{P^m}, \mathbf{G^m}) $$
where $\mathcal{L}^{\text{ce}}$ denotes the cross-entropy loss for classification, and $\mathcal{L}_{inj}^{\text{ce}}$ denotes the extra taxonomy-aware matching loss, which is also a cross-entropy loss but computed on different predictions and ground truths. $\mathcal{L}^{\text{dice}}$ and $\mathcal{L}^{\text{bce}}$ denote the dice loss and binary cross-entropy loss between mask predictions and matched ground truths, following the strategy in Mask2Former, where we sample different sets of $K = 12544 = 112 \times 112$ points for different pairs of prediction and ground truth using importance sampling. Here $\mathbf{P}$ is the prediction and $\mathbf{G}$ is the ground truth; $\mathbf{c}$ refers to classification, $\mathbf{m}$ refers to masks, and $\mathbf{c^{\prime}}$ refers to classification right after the injection process. $\mathbf{G^{c^{\prime}}}$ refers to the ground truth categories restricted to the taxonomy represented by the predicted set of taxonomic embeddings. 
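To illustrate how the four terms of a loss of this shape combine, here is a schematic NumPy sketch for a single matched query and mask. All function names are ours, and the actual implementation details from the paper (Hungarian matching, importance-sampled points, loss weighting) are deliberately omitted:

```python
import numpy as np

def ce_loss(logits, label):
    """Cross-entropy for one query's class logits against an integer label."""
    z = logits - logits.max()                 # numerically stable softmax
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def dice_loss(pred, gt, eps=1e-6):
    """Soft Dice loss between mask probabilities and a binary gt mask."""
    inter = (pred * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def bce_loss(pred, gt, eps=1e-6):
    """Mean binary cross-entropy between mask probabilities and gt mask."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -(gt * np.log(p) + (1.0 - gt) * np.log(1.0 - p)).mean()

def total_loss(cls_logits, cls_gt, inj_logits, inj_gt, mask_pred, mask_gt):
    # L = L_ce + L_ce^inj (taxonomy-aware matching term) + L_dice + L_bce
    return (ce_loss(cls_logits, cls_gt) + ce_loss(inj_logits, inj_gt)
            + dice_loss(mask_pred, mask_gt) + bce_loss(mask_pred, mask_gt))

mask_gt = np.array([1.0, 1.0, 0.0, 0.0])          # toy 4-point mask sample
good_mask = np.array([0.9, 0.8, 0.1, 0.2])
logits = np.array([2.0, 0.1, -1.0])
loss_good = total_loss(logits, 0, logits, 0, good_mask, mask_gt)
loss_bad = total_loss(logits, 0, logits, 0, 1.0 - good_mask, mask_gt)
```

The second cross-entropy term is what the supervision described above adds: it scores the post-injection classification against the taxonomy-restricted ground truth.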
By adding this supervision, we further guarantee the guidance provided by the compiled taxonomic embeddings.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed rebuttal. Most of my concerns are addressed. The proposed method shows non-trivial performance in unifying multiple video segmentation datasets into a single model. However, I agree with Reviewer 5w4z's concern about the uniqueness of video tasks. For instance, UNINEXT unifies not only VOS but also other detection, segmentation, and reference comprehension tasks on both video and image. It is somewhat unclear whether the proposed method is specifically motivated to boost VOS rather than image segmentation, or whether the authors simply follow the trend of multi-dataset training. I expect the authors to provide deep insights into how the proposed method effectively unifies multi-dataset VOS and boosts the performance on each dataset.

---

Reply to Comment 1.1.1: Comment: We extend our deepest appreciation for the invaluable time and effort you have dedicated to reviewing and providing insightful comments on our paper. In fact, our research focus remains on the VIS task rather than image instance segmentation, and we are open to the results (whether improved or degraded) of jointly training image datasets. Our research motivation is that current image instance segmentation datasets are significantly larger than VIS datasets. LVIS, for example, has 160k images and 2M instance mask annotations. The scenarios contained in these images are thus more varied than those in VIS datasets. Such image datasets are so large and dominant that jointly training multiple image instance segmentation datasets becomes less exciting and influential. On the other hand, VIS datasets are smaller in scale, and what we possess are numerous isolated field-specific datasets rather than a single dominant large-scale dataset. 
Because of this, combining these varied VIS datasets and training them jointly becomes useful and meaningful for VIS research. Also, in our early experiments we noticed that adapting pretrained MAE-based weights (ImageMAE \& VideoMAE) to the Mask2Former structure is unsuccessful (please refer to Table 1). Based on these observations, we were motivated to apply joint-training methods to VIS, hoping to unify these separate field-specific datasets. A major challenge in multiple-dataset joint training is the heterogeneity of the datasets’ category spaces. As a result of this difference, although mask precision increases with data volume, dataset biases might hinder models from generalizing: simply utilizing multiple datasets will dilute the models’ attention across different categories. Therefore, increasing the data scale and enriching the label space while improving classification precision becomes a huge challenge for researchers. This phenomenon was shown in our ablation studies. Our method adjusts the language embeddings via a taxonomy compilation module, which consists of a spatio-temporal adapter and an FFN network, as well as a taxonomy injection module, which fuses the taxonomy information into video queries with multi-level cross-attention and self-attention layers. In the taxonomy compilation module, taxonomy embeddings are designed to interact with video features in order to unearth the potential taxonomy contained in each of the video frames. After this, we calculate the dot products over the modulated taxonomic embeddings to predict the most relevant taxonomy for the given video. By compiling and injecting the likely taxonomic information into queries as guidance, the queries converge to the desired instances and categories faster and finally achieve better precision. 
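As a toy illustration of this top-k relevance selection: the sketch below scores each taxonomy embedding against a pooled video feature by dot product and keeps the k best. The real TCM modulates the embeddings with a spatio-temporal adapter and cross-attention rather than a single pooled dot product, and all names here are ours:

```python
import numpy as np

def select_topk_taxonomy(tax_emb, video_feat, k=10):
    """Score each category embedding against a pooled video feature and
    return the indices and embeddings of the k most relevant categories."""
    pooled = video_feat.mean(axis=0)          # (d,) pooled spatio-temporal feature
    scores = tax_emb @ pooled                 # (C,) relevance per category
    topk = np.argsort(scores)[::-1][:k]       # indices of the k highest scores
    return topk, tax_emb[topk]

rng = np.random.default_rng(1)
tax = rng.normal(size=(40, 8))     # 40 category embeddings of dimension 8
feat = rng.normal(size=(100, 8))   # 100 frame/pixel features
scores = tax @ feat.mean(axis=0)
idx, selected = select_topk_taxonomy(tax, feat, k=5)
```

Only the selected embeddings would then be injected into the video queries, which is why the set size k matters in the Table 5-2 ablation.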
Also, once we constrain the taxonomy to the most relevant categories of the input video, we no longer have to worry about the full heterogeneous category space. As depicted in Table 2, even for categories that exist in only one VIS dataset, our method still manages to enhance their Average Precision (AP).

Table 1. Experiments on Adapting ImageMAE \& VideoMAE to VIS tasks

| Method | Backbone | AP |
| ------------ | -------- | ---- |
| M2F | SWIN-B | 59.5 |
| ImageMAE+M2F | VIT-B | 53.3 |
| VideoMAE+M2F | VIT-B | 27.1 |

Table 2. Comparisons between per-category performance of Mask2Former-VIS and TMT-VIS. ‘MDT’ refers to ‘Multiple Datasets Training’, indicating whether the approach is trained on YTVIS, OVIS, and UVO. The '√' in the middle columns indicates whether the category is contained in the corresponding dataset. For example, all datasets have the 'person' category, and thus all datasets have the corresponding '√'.

| Categories | Methods | YTVIS | OVIS | UVO | MDT | AP |
| ---------- | ------- | :---: | :--: | :--: | :--: | :--: |
| Person | M2F | √ | | | | 57.2 |
| | TMT-VIS | √ | | | | 57.9 |
| | M2F | √ | √ | √ | √ | 59.3 |
| | TMT-VIS | √ | √ | √ | √ | 60.7 |
| Duck | M2F | √ | | | | 41.6 |
| | TMT-VIS | √ | | | | 42.4 |
| | M2F | √ | | | √ | 38.3 |
| | TMT-VIS | √ | | | √ | 43.9 |
| Monkey | M2F | √ | | | | 24.7 |
| | TMT-VIS | √ | | | | 26.7 |
| | M2F | √ | √ | | √ | 25.6 |
| | TMT-VIS | √ | √ | | √ | 29.1 |
| Snowboard | M2F | √ | | | | 8.9 |
| | TMT-VIS | √ | | | | 11.8 |
| | M2F | √ | | √ | √ | 10.0 |
| | TMT-VIS | √ | | √ | √ | 14.5 |
Summary: Due to the lack of large-scale datasets for the VIS task, the authors propose a multi-dataset joint training method. Due to the heterogeneity of the category spaces of different datasets, simply stacking datasets may lead to performance degradation. To address this, the authors design a two-stage taxonomy aggregation module, which first compiles the taxonomic information of videos from different datasets and then aggregates this information, translating taxonomic priors into instance queries for better performance. The two stages comprise a taxonomy compilation module and a taxonomy injection module: the former compiles taxonomic information, and the latter uses the compiled taxonomic information to inject guidance into the queries. Experiments demonstrate the effectiveness of this method.

Strengths: The main advantage of this paper is the proposed joint training model for multiple datasets, which uses taxonomy information to mitigate the heterogeneity in the category spaces of different datasets; it achieves good results, and one could try to migrate this approach to other fields.

Weaknesses: The paper lacks clear explanations of the module descriptions and the loss function. In the experimental comparison, the performance on the OVIS dataset is not satisfactory. Furthermore, the generalization ability of this work across different datasets is limited.

Technical Quality: 2 fair Clarity: 3 good

Questions for Authors:
1. In the Taxonomy Compilation Module, what is the specific form or dimension of the label space? Is it a set of category names?
2. Taxonomy-aware matching loss, mask loss, and cls loss: what are the specific formulas for these three losses?
3. In Table 2, can the effectiveness of the proposed method be verified, given that it has lower performance on the OVIS dataset compared to both the IDOL and GenVIS methods?
4. In Experiment V of Table 3, when jointly training with the YouTube-VIS 2019 and UVO datasets, the Mask2Former-VIS method exhibits a decrease of 0.2 AP compared to using only YTVIS, but shows an improvement of 3.2 AP50. However, the TMT-VIS method experiences a decrease of 0.3 AP and 0.5 AP50. Can we conclude that the model lacks robustness to differences between datasets? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, we want to thank you for the detailed, insightful and constructive comments.

### Label space in Taxonomy Compilation Module

The specific dimension of the label space is $K \times D$, where $K$ refers to the total number of categories across datasets, and $D$ is the hidden dimension of the text encoder's outputs.

### Formulas of taxonomy-aware matching loss, mask loss, and classification loss

The formulas of the different losses are shown below:
$$ \mathcal{L} = \sum\mathcal{L}^{\text{ce}}(\mathbf{P^c}, \mathbf{G^c}) + \sum\mathcal{L}_{inj}^{\text{ce}}(\mathbf{P^{c^{\prime}}},\mathbf{G^{c^{\prime}}}) + \sum \mathcal{L}^{\text{dice}}(\mathbf{P^m}, \mathbf{G^m}) + \sum \mathcal{L}^{\text{bce}}(\mathbf{P^m}, \mathbf{G^m}) $$
where $\mathcal{L}^{\text{ce}}$ denotes the cross-entropy loss for classification, and $\mathcal{L}_{inj}^{\text{ce}}$ denotes the extra 'taxonomy-aware matching loss', which is also a cross-entropy loss, except that its predictions and ground truths differ. $\mathcal{L}^{\text{dice}}$ and $\mathcal{L}^{\text{bce}}$ denote the dice loss and binary cross-entropy loss between mask predictions and matched ground truths, following the strategy in Mask2Former, where we sample different sets of $K=12544=112 \times 112$ points for each prediction–ground-truth pair using importance sampling. Here $\mathbf{P}$ is the prediction and $\mathbf{G}$ is the ground truth; superscript $\mathbf{c}$ refers to classification, $\mathbf{m}$ refers to masks, and $\mathbf{c^{\prime}}$ refers to classification right after the injection process. $\mathbf{G^{c^{\prime}}}$ refers to the ground-truth categories of the taxonomy represented by the predicted set of taxonomic embeddings. By adding this supervision, we further guarantee that the guidance provided by the compiled taxonomic embeddings is successfully injected.

### The effectiveness of TMT-VIS on OVIS

Our method is built upon Mask2Former and VITA, which were the state-of-the-art offline methods.
Also, our module can be added to the SOTA methods GenVIS & IDOL, which significantly boosts their performance as well, as shown in Table 4-1. The results verify the effectiveness of our proposed method, which is plug-and-play for both online and offline VIS solutions. Moreover, the hyperparameters of our added design are not tuned, so the performance could be even higher with further experiments.

Table 4-1. Experiments on the effectiveness of TMT-VIS on OVIS.

| Method | Multi-dataset? | w/ our design | Backbone | AP |
| ------ | :--------------: | :------------: | --------- | :--: |
| IDOL | | | ResNet-50 | 30.2 |
| IDOL | √ | | ResNet-50 | 32.1 |
| IDOL | √ | √ | ResNet-50 | 33.6 |
| GenVIS | | | ResNet-50 | 35.8 |
| GenVIS | √ | | ResNet-50 | 37.3 |
| GenVIS | √ | √ | ResNet-50 | 38.4 |

### Robustness of our method across various datasets

It is true that there is degradation when jointly training on UVO and YTVIS-19, but we argue that in most training settings TMT-VIS improves significantly over Mask2Former (see the table in the rebuttal PDF), so this result does not indicate that our method lacks robustness. In the following, we try to explain why this may occur.

The drop in performance may be caused by the imbalance in scale among VIS datasets. YTVIS-19 builds on an existing dataset called YouTube-VOS. OVIS is collected specifically for video instance segmentation in occluded scenes. UVO, on the other hand, adopts videos from Kinetics-400, which are human-centric and contain diverse sets of human actions and human-object interactions; UVO is also densely annotated with many more instances. The key statistics of these datasets are shown in the following table, which corresponds to Table 1 of the supplementary file. As illustrated there, the imbalance between these VIS datasets is significant.
When jointly training YTVIS-19 with UVO, the model tends to converge to fit UVO, which degrades validation performance on YTVIS-19. When jointly training YTVIS-19 and UVO with OVIS, the imbalance is alleviated, resulting in a notable performance improvement. It is worth noting that due to the scale of VIS datasets, especially YTVIS-19, the final results usually fluctuate by approximately 0.3 AP, and this may also be the direct reason for the seeming drop of our TMT-VIS compared with Mask2Former-VIS, because in other training settings our TMT-VIS significantly outperforms Mask2Former-VIS.

Table 4-2. Key statistics of multiple VIS datasets.

| | YT19 | YT21 | OVIS | UVO |
| ----------------- | ---- | ---- | ---- | ------ |
| Videos | 2883 | 3859 | 901 | 11228 |
| Categories | 40 | 40 | 25 | 81 |
| Instances | 4883 | 8171 | 5223 | 104898 |
| Masks | 131k | 232k | 296k | 593k |
| Masks per Frame | 1.7 | 2 | 4.7 | 12.3 |
| Objects per Video | 1.6 | 2.1 | 5.8 | 9.3 |

---

Rebuttal 2: Comment: The authors made corresponding explanations and ran experiments on the robustness of the method. It is unclear whether the authors observed performance fluctuations caused by dataset imbalance in the original work. In response to this problem, the authors do not seem to address the problem directly, but instead adopt another joint training strategy. Therefore, combining the comments of other reviewers and the authors' responses, I still maintain my original score.

---

Rebuttal Comment 2.1: Comment: We deeply appreciate the valuable time and selfless effort you have dedicated to reviewing and commenting on our paper. Dataset imbalance can be roughly divided into two types: category imbalance and dataset-scale imbalance. While it is true that scale imbalance could potentially impact the joint training of multiple datasets, we want to clarify that this is not our primary research focus.
In our methodology, we employ a widely used weighted sampling strategy to mitigate the scale imbalance across VIS datasets, with ablation studies on this strategy's hyperparameters detailed in Table 5 of our initially submitted paper. Our main research focus is category imbalance (also referred to as class imbalance in some literature). We address this issue using a two-stage taxonomy aggregation module consisting of: the Taxonomy Compilation Module (TCM), designed to uncover the potential taxonomy within each video frame by letting taxonomic embeddings interact with video features; and the Taxonomy Injection Module (TIM), developed to aggregate the modulated taxonomic embeddings that carry the most relevant taxonomy information. Some categories demonstrate marked improvements (as shown in our supplementary material). As depicted in Table 1, even for categories that exist in only one VIS dataset, our method still manages to enhance their Average Precision (AP) despite significant category imbalance.

Table 1. Comparisons between per-category performance of Mask2Former-VIS and TMT-VIS. 'MDT' refers to 'Multiple Datasets Training', indicating whether the approach is trained on YTVIS, OVIS, and UVO. A '√' in the middle columns indicates that the category is contained in the corresponding dataset. For example, all datasets have the 'person' category, and thus all datasets have the corresponding '√'.
| Categories | Methods | YTVIS | OVIS | UVO | MDT | AP |
| ---------- | ------- | :---: | :--: | :--: | :--: | :--: |
| Person | M2F | √ | | | | 57.2 |
| | TMT-VIS | √ | | | | 57.9 |
| | M2F | √ | √ | √ | √ | 59.3 |
| | TMT-VIS | √ | √ | √ | √ | 60.7 |
| Duck | M2F | √ | | | | 41.6 |
| | TMT-VIS | √ | | | | 42.4 |
| | M2F | √ | | | √ | 38.3 |
| | TMT-VIS | √ | | | √ | 43.9 |
| Monkey | M2F | √ | | | | 24.7 |
| | TMT-VIS | √ | | | | 26.7 |
| | M2F | √ | √ | | √ | 25.6 |
| | TMT-VIS | √ | √ | | √ | 29.1 |
| Snowboard | M2F | √ | | | | 8.9 |
| | TMT-VIS | √ | | | | 11.8 |
| | M2F | √ | | √ | √ | 10.0 |
| | TMT-VIS | √ | | √ | √ | 14.5 |

---

Rebuttal Comment 2.2: Comment: Please expand on why, if 3/4 of the other reviews were positive, you still strongly believe the paper should be rejected. Which reasons for acceptance from the other positive reviewers do you disagree with?

---

Rebuttal Comment 2.3: Comment: Dear Reviewer VJ6M, We express our heartfelt appreciation for the valuable time and selfless dedication you have invested in reviewing our paper. As the deadline for discussion draws near, we would like to inquire whether our response adequately addressed your concerns. We are eager to engage in further discussion. Best regards, Authors of paper 11160
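The total loss described in the rebuttal above (classification cross-entropy, the extra taxonomy-aware matching cross-entropy, dice, and binary cross-entropy on masks) can be sketched minimally. The function names, probability-space inputs, and single-instance scope are our assumptions, not the authors' exact implementation:

```python
import numpy as np

def ce(probs, label, eps=1e-7):
    """Cross-entropy for one prediction: -log p(true class)."""
    return float(-np.log(max(probs[label], eps)))

def bce(pred_mask, gt_mask, eps=1e-7):
    """Binary cross-entropy averaged over mask points."""
    p = np.clip(pred_mask, eps, 1 - eps)
    return float(-(gt_mask * np.log(p) + (1 - gt_mask) * np.log(1 - p)).mean())

def dice(pred_mask, gt_mask, eps=1e-7):
    """Dice loss: 1 - 2|P∩G| / (|P| + |G|)."""
    inter = float((pred_mask * gt_mask).sum())
    return 1.0 - (2 * inter + eps) / (float(pred_mask.sum() + gt_mask.sum()) + eps)

def total_loss(cls_probs, cls_gt, inj_probs, inj_gt, pred_mask, gt_mask):
    """L = L_ce + L_ce,inj + L_dice + L_bce, mirroring the rebuttal formula."""
    return (ce(cls_probs, cls_gt) + ce(inj_probs, inj_gt)
            + dice(pred_mask, gt_mask) + bce(pred_mask, gt_mask))
```

In the actual method the mask terms are evaluated on importance-sampled points rather than full masks, and each term is summed over matched instances.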
Summary: This paper works on multi-dataset training for video instance segmentation. The authors build on top of Mask2Former and introduce two components to enable the model to work with different label sets and take advantage of the given label sets. The two components are both ablated in experiments. The overall framework improves Mask2Former on three popular datasets and achieves state-of-the-art performance.

Strengths:
- The problem of training a video instance segmentation model on multiple datasets is important. This paper takes a good step in this direction. The overall framework makes sense to me.
- The results are strong. The improvements over the Mask2Former and VITA baselines are significant and are consistent across three datasets and two backbones.
- The contributions are well ablated in Tables 4-7.
- The paper is well written and easy to follow.

Weaknesses:
- The most important table, Table 3, is extremely hard to read. The numbers in different rows are on different datasets, making it hard to compare across rows. PLEASE split the columns by datasets. Please report numbers on all datasets, which should be feasible as the model uses a CLIP classifier. We can remove AP50/AP75 if space is a constraint.
- Please clarify: does the model require a known vocabulary during inference? Can it test on in-the-wild images (e.g., using a unified vocabulary)?
- [Optional] One other interesting aspect of multi-dataset training in VIS is the difference in framerate/image size across datasets. It would be great if the authors could provide some discussion on this (if it is not trivial).

Technical Quality: 3 good Clarity: 3 good

Questions for Authors: Overall the paper works on an important task, proposes a valid model, and has great results. My concerns are mostly on presentation and clarity (see weaknesses). Please clarify them in the rebuttal. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, we want to thank you for the detailed, insightful and constructive comments.

### Reorganization of Table 3

In Table 5 of the supplementary material, we reported several experiments on the zero-shot performance of our method compared with previous methods. Here, we update these experiments and reorganize Table 3 of the submission. The updated version is shown below.

Table 3-1. Ablation study on training with multiple VIS datasets.

| ID | Method | YTVIS$_{train}$ | OVIS$_{train}$ | UVO$_{train}$ | YTVIS$_{val}$ | OVIS$_{val}$ | UVO$_{val}$ |
| :--: | :-----: | :---: | :--: | :--: | :---: | :--: | :--: |
| 1 | M2F | √ | | | 46.4 | 2.3 | 1.9 |
| 2 | M2F | | √ | | 5.2 | 16.5 | 3.6 |
| 3 | M2F | | | √ | 4.4 | 2.5 | 18.2 |
| 4 | M2F | √ | √ | | 47.3 | 17.4 | 4.9 |
| 5 | M2F | √ | | √ | 46.2 | 3.7 | 19.0 |
| 6 | M2F | | √ | √ | 7.1 | 16.6 | 18.7 |
| 7 | M2F | √ | √ | √ | 47.2 | 17.2 | 19.3 |
| 8 | TMT-VIS | √ | | | 47.3 | 7.2 | 6.5 |
| 9 | TMT-VIS | | √ | | 10.5 | 17.8 | 8.0 |
| 10 | TMT-VIS | | | √ | 10.1 | 8.6 | 18.8 |
| 11 | TMT-VIS | √ | √ | | 48.8 | 20.9 | 10.1 |
| 12 | TMT-VIS | √ | | √ | 47.0 | 10.3 | 20.4 |
| 13 | TMT-VIS | | √ | √ | 14.8 | 19.4 | 20.2 |
| 14 | TMT-VIS | √ | √ | √ | 49.7 | 22.8 | 21.2 |

### Test on in-the-wild images

For tests on in-the-wild data, please refer to the first answer. Unfortunately, there are no other in-the-wild VIS datasets, so we tested performance on an in-the-wild dataset from another video-related task, which further demonstrates the transferability of our design. We tested our method on VIPSeg [1] based on the popular Video K-Net [2]; the results are shown in the table below. VIPSeg is a large-scale dataset for video panoptic segmentation (VPS) [3], a task which aims to simultaneously predict object classes, bounding boxes, masks, instance-id associations, and semantic segmentation in video frames.
There are a total of 3,536 videos with 84,750 pixel-wise annotated frames in VIPSeg, covering 232 scenarios with 124 categories, including 58 'thing' classes and 66 'stuff' classes. Here, VPQ and STQ are both metrics from the VPS task. With our additional design, we compiled taxonomic information and injected it into thing and stuff kernels, and trained YTVIS and VIPSeg jointly, with the total training epochs set to 12. As the results below show, with our design the performance of Video K-Net increases by 2.3% VPQ and 2.6% STQ. This further demonstrates the transferability of our design to in-the-wild scenarios.

Table 3-2. Results on the VIPSeg dataset.

| Method | Multiple Dataset? | Backbone | VPQ | STQ |
| ----------- | :---------------: | --------- | ---- | ---- |
| Video K-Net | | ResNet-50 | 26.1 | 33.1 |
| Video K-Net | √ | ResNet-50 | 27.6 | 34.8 |
| TMT-VIS | √ | ResNet-50 | 28.4 | 35.7 |

### Vocabulary during inference

For inference, our best-performing configuration utilizes the given vocabulary of the dataset. However, our method still performs well with a unified vocabulary, thanks to the CLIP text encoder, as Table 3-1 shows.

### Different frame rates across datasets

We resampled the VIS datasets to different frame rates (fps) when training on multiple datasets; the results are shown below. It is worth mentioning that all VIS datasets are annotated at 6 fps (UVO at 30). As the fps decreases from 6 to 1, the performance drops significantly. This is likely a consequence of learning temporal relations: with sparser annotation, the displacement of objects between annotated frames becomes more significant, which makes it harder for queries to both segment the desired instances and track their trajectories.

Table 3-3. Experiments on different dataset fps.
| Method | Testing Dataset | Dataset FPS | AP |
| ----------- | :-------------: | :---------: | :--: |
| Mask2Former | YTVIS-19 | 1 | 45.5 |
| TMT-VIS | YTVIS-19 | 1 | 50.8 |
| Mask2Former | YTVIS-19 | 6 | 46.4 |
| TMT-VIS | YTVIS-19 | 6 | 52.6 |

### Reference

[1] Miao J, Wang X, Wu Y, et al. Large-scale video panoptic segmentation in the wild: A benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 21033-21043.
[2] Li X, Zhang W, Pang J, et al. Video K-Net: A simple, strong, and unified baseline for video segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 18847-18857.
[3] Kim D, Woo S, Lee J Y, et al. Video panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 9859-9868.

---

Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Thank you for the rebuttal. My concerns are all well addressed in the rebuttal. I keep my original positive rating.
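The frame-rate resampling used in the experiment above amounts to subsampling the annotated frame sequence. A minimal sketch (the function name and the uniform-stride choice are our assumptions):

```python
def resample_to_fps(frame_indices, src_fps, dst_fps):
    """Subsample an annotated frame sequence from src_fps down to dst_fps
    by keeping every (src_fps // dst_fps)-th frame."""
    if src_fps % dst_fps != 0:
        raise ValueError("dst_fps must evenly divide src_fps in this sketch")
    step = src_fps // dst_fps
    return frame_indices[::step]

# 5 seconds of annotations at 6 fps, resampled to 1 fps
frames = list(range(30))
print(resample_to_fps(frames, 6, 1))  # -> [0, 6, 12, 18, 24]
```

Fewer kept frames means larger object displacement between consecutive annotated frames, which matches the AP drop the authors report at 1 fps.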
Summary: This article addresses video instance segmentation from the perspective of multi-dataset joint training. The proposed method is based on DETR, and the main contribution is to inject label taxonomy into model training based on CLIP. The model is evaluated on several standard benchmarks.

Strengths: While multi-dataset training is not a new idea in fields such as object detection, this work is a pioneering study (as far as I know) in exploring the idea for video instance segmentation. The paper is overall well written and easy to follow in most parts. The experiments are extensive.

Weaknesses: I would not claim these as weaknesses, but rather suggestions that might help improve the work.
First, while it is good to see that the method shows improvements, I would suggest not characterizing the results as "a such huge improvement". On one hand, a 2%-3% improvement is not huge. On the other hand, the improvements merely meet expectations, since more data are used for training. In addition to quantitative results, it is essential to report other metrics, such as training time, for a more comprehensive comparison.
Second, as discussed in Sec. 2, there are other strategies for handling heterogeneous labels in multi-dataset joint training, and I am curious whether existing strategies can be directly used for video instance segmentation. If yes, how do they perform?
Third, in terms of formulation, $E_\infty$ is used in Sec. 3.2 but does not appear in Eqs. 2-5. It is important to ensure consistency. Eq. 6 is not properly formulated either; the same symbol, $X_{l-1}$, is used for both input and output.

Technical Quality: 3 good Clarity: 3 good

Questions for Authors: Beyond the questions in [weaknesses], I have one last question: is the model applicable to open-world scenarios? Or, at least, can it generalize to other datasets not involved in model training? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors tried to discuss limitations; however, the discussion provided is rather elementary and does not offer crucial insights into the method. More in-depth analysis should be provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, we want to thank you for the detailed, insightful and constructive comments.

### More metrics

Thanks for your careful suggestions. First, it is true that performance will improve as data volume increases, but our method successfully alleviates the heterogeneity in the category spaces of multiple VIS datasets, which leads to a larger improvement than simply training on the pooled data. As for the characterization "such a huge improvement", we will carefully revise it in the final version. We also measured training time and model parameters against some current methods. The model parameters and FPS are: SeqFormer [1] (220M/27.7), VITA [2] (247M/22.8), and our TMT-VIS (250M/21.4). In addition, we compared our results with the previous multi-dataset joint training method UNINEXT [3]; the details are shown in Table 2-1.

Table 2-1. Comparison with UNINEXT on YTVIS-19.

| Method | Datasets Number | Training time | Backbone | AP |
| ------------- | :-------------: | -------------- | :--------: | ---- |
| UNINEXT | 8 | 3 days | ConvNext-L | 64.3 |
| TMT-VIS (Ours) | 4 | 1 day 12 hours | Swin-L | 64.9 |

### Transferring multi-dataset joint training methods to VIS

We transferred existing image-level strategies for handling heterogeneous labels in multi-dataset joint training to VIS tasks, based on Mask2Former-VIS; the results are shown in the table below. We integrated the popular multiple-dataset object detection method UniDet [4] and compared it to our design. UniDet first trains a single partitioned detector on multiple datasets with a shared backbone and dataset-specific outputs and losses, and then unifies the outputs of the partitioned detector into a common taxonomy fully automatically. Another popular multiple-dataset object detection method is OmDet [5], but it is tied to its own specific architecture. As the table shows, UniDet performs worse than our design (3.7 AP lower), given that it trains in two steps.

Table 2-2.
Experiments on popular multi-dataset joint training methods.

| Method | Backbone | AP |
| ------------- | --------- | ---- |
| UniDet | ResNet-50 | 48.9 |
| TMT-VIS (Ours) | ResNet-50 | 52.6 |

### Formulation

Thanks again for your careful suggestions. $E_{\infty}$ actually corresponds to $E_{1}$ in Eqs. 2-5; we misused a \mathcal command for $E_{1}$, so it rendered as $E_{\infty}$. As for $X_{l-1}$, in the final version we will add a prime to distinguish between the input and output of the different TCM layers. We will revise the whole paper thoroughly and more carefully to ensure both consistency and correctness of the formulations.

### Open-world scenarios and other video datasets

We updated and reorganized Table 3 of the original submission; the results are shown in the rebuttal PDF. The table contains the zero-shot performance of TMT-VIS, and the results show the potential of our model in open-world settings. We also tested our method on VIPSeg [6] based on the popular Video K-Net [7]; the results are shown in Table 2-3. VIPSeg is a large-scale dataset for video panoptic segmentation (VPS) [8], a task which aims to simultaneously predict object classes, bounding boxes, masks, instance-id associations, and semantic segmentation in video frames. There are a total of 3,536 videos with 84,750 pixel-wise annotated frames in VIPSeg, covering 232 scenarios with 124 categories, including 58 'thing' classes and 66 'stuff' classes. Here, VPQ and STQ are both metrics from the VPS task. With our additional design, we compiled taxonomic information and injected it into thing and stuff kernels, and trained YTVIS and VIPSeg jointly, with the total training epochs set to 12. With our design, the performance of Video K-Net increases by 2.3% VPQ and 2.6% STQ. This further demonstrates the transferability of our design to in-the-wild scenarios.

Table 2-3. Results on the VIPSeg dataset.
| Method | Multiple Dataset? | Backbone | VPQ | STQ |
| ----------- | :---------------: | --------- | ---- | ---- |
| Video K-Net | | ResNet-50 | 26.1 | 33.1 |
| Video K-Net | √ | ResNet-50 | 27.6 | 34.8 |
| TMT-VIS | √ | ResNet-50 | 28.4 | 35.7 |

### Reference

[1] Wu J, Jiang Y, Bai S, et al. SeqFormer: Sequential transformer for video instance segmentation. In European Conference on Computer Vision, 2022: 553-569.
[2] Heo M, Hwang S, Oh S W, et al. VITA: Video instance segmentation via object token association. Advances in Neural Information Processing Systems, 2022, 35: 23109-23120.
[3] Yan B, Jiang Y, Wu J, et al. Universal instance perception as object discovery and retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 15325-15336.
[4] Zhou X, Koltun V, Krähenbühl P. Simple multi-dataset detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 7571-7580.
[5] Zhao T, Liu P, Lu X, et al. OmDet: Language-aware object detection with large-scale vision-language multi-dataset pre-training. arXiv preprint arXiv:2209.05946, 2022.
[6] Miao J, Wang X, Wu Y, et al. Large-scale video panoptic segmentation in the wild: A benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 21033-21043.
[7] Li X, Zhang W, Pang J, et al. Video K-Net: A simple, strong, and unified baseline for video segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 18847-18857.
[8] Kim D, Woo S, Lee J Y, et al. Video panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 9859-9868.

---

Rebuttal Comment 1.1: Title: thanks for the response Comment: Thanks for the detailed rebuttal and new experiments. Most of my concerns have been properly addressed.
But I find Table 2-1 to be somewhat perplexing, since the dataset numbers and underlying backbones are different, which makes it hard to make a proper comparison. In addition, I read the other reviewers' comments and would like to discuss more, following Reviewer 5w4z's `why video?` question. On one hand, the model shows favorable performance in the new `image` setting, i.e., a gain of +4% is undoubtedly promising. But does this outcome imply that the method's uniqueness is not exclusively tied to `video`? On the other hand, the newly added experiments continue to rely on YTVIS-19. I am actually curious about results on datasets exclusively comprised of images, like COCO and LVIS. While I don't anticipate fresh results at the current stage, the question seems to remain unanswered, and I expect the authors to give more insights.

---

Reply to Comment 1.1.1: Comment: We are sincerely grateful to you for the precious time and selfless effort you have devoted to reviewing and commenting on our paper. Our research is primarily anchored on video instance segmentation (VIS), not image instance segmentation, and we are willing to accept all types of results, be they advancements or drawbacks, stemming from jointly training image datasets. We are exploring the usage of joint training methods in VIS with a view to combining the existing standalone, field-specific VIS datasets into a more exhaustive resource. Our motivation stems from the notable size difference between current image instance segmentation datasets and VIS datasets. As an example, LVIS hosts 160k images together with 2M instance mask annotations, thereby providing a richer variety of scenarios compared to VIS datasets. Conversely, VIS datasets are typically smaller, and the resources available to us are largely fragmented, field-specific datasets rather than larger, established ones.
Given that there's no prevalent joint training strategy for VIS and a lack of widely recognized large-scale VIS datasets, our proposition is to jointly train the available VIS datasets to enhance overall VIS performance.

Table 1. Key statistics of image instance segmentation datasets.

| | COCO | LVIS |
| ---------- | ---- | ---- |
| Images | 164K | 160K |
| Categories | 80 | 1203 |
| Masks | 1.2M | 2M |

Table 2. Key statistics of multiple VIS datasets.

| | YT19 | YT21 | OVIS | UVO |
| ----------------- | ---- | ---- | ---- | ------ |
| Videos | 2883 | 3859 | 901 | 11228 |
| Categories | 40 | 40 | 25 | 81 |
| Instances | 4883 | 8171 | 5223 | 104898 |
| Masks | 131k | 232k | 296k | 593k |
| Masks per Frame | 1.7 | 2 | 4.7 | 12.3 |
| Objects per Video | 1.6 | 2.1 | 5.8 | 9.3 |
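One common way to counter the scale imbalance visible in these statistics is to sample datasets in proportion to a dampened power of their size; the authors mention using a weighted sampling strategy elsewhere in the discussion. A minimal sketch (the power-law form and the `alpha` value are our assumptions, not the authors' exact strategy):

```python
def dataset_sampling_weights(sizes, alpha=0.5):
    """Per-dataset sampling probabilities w_i proportional to n_i ** alpha.
    alpha < 1 flattens the scale imbalance; alpha = 1 is size-proportional."""
    raw = [n ** alpha for n in sizes]
    total = sum(raw)
    return [w / total for w in raw]

# video counts from Table 2: YT19, YT21, OVIS, UVO
weights = dataset_sampling_weights([2883, 3859, 901, 11228], alpha=0.5)
```

With `alpha=0.5`, UVO is still sampled most often, but its sampling advantage over OVIS shrinks from roughly 12x (the raw size ratio) to about 3.5x.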
Rebuttal 1: Rebuttal: First of all, we want to thank all reviewers for the detailed, insightful and constructive comments. In the PDF file, we have attached the reorganized version of the ablation study on training with multiple VIS datasets, as well as the key statistics of these datasets. In this part, we want to address the novelty-related and robustness-related concerns from the reviewers.

### Novelty

A major challenge in multiple-dataset joint training is the heterogeneity of the datasets' category spaces. As a result of this difference, although mask precision increases with data volume, dataset biases might hinder models from generalizing: simply pooling multiple datasets dilutes the attention of models across different categories. Therefore, increasing the data scale and enriching the label space while improving classification precision becomes a significant challenge for researchers. This phenomenon was shown in our ablation study.

A straightforward way to enhance classification precision is to aggregate category-related language embeddings into the modelling process; in DETR-based models, the aggregation targets are, more specifically, the queries. Previous multiple-dataset joint training methods, such as UNINEXT [1], adopt this idea and manage to fuse category-guided prompts with video features. However, aggregating all category-guided prompts introduces irrelevant semantic information. We argue that the instances in an input video are far fewer than the size of the whole label space, so by filtering out irrelevant categories, the aggregation of language embeddings becomes more effective and efficient. Thus, in our TCM, taxonomy embeddings are designed to interact with video features in order to unearth the potential taxonomy contained in each video frame. After this, we calculate the dot product between the different modulated taxonomic embeddings to predict the most relevant taxonomy in the given video.
By aggregating these most relevant compiled taxonomic embeddings into query features through cross-attention and self-attention modules, we obtain refined query features with not only richer but also condensed taxonomy information, in contrast with previous methods, which further improves performance.

### Robustness of our method across various datasets

It is true that there is degradation when jointly training on UVO and YTVIS-19, but we argue that in most training settings TMT-VIS improves significantly over Mask2Former (see the table in the rebuttal PDF), so this result does not indicate that our method lacks robustness. In the following, we try to explain why this may occur. The drop in performance may be caused by the imbalance in scale among VIS datasets. YTVIS-19 builds on an existing dataset called YouTube-VOS. OVIS is collected specifically for video instance segmentation in occluded scenes. UVO, on the other hand, adopts videos from Kinetics-400, which are human-centric and contain diverse sets of human actions and human-object interactions; UVO is also densely annotated with many more instances. The key statistics of these datasets are shown in the attached table, which corresponds to Table 1 of the supplementary file. As illustrated there, the imbalance between these VIS datasets is significant. When jointly training YTVIS-19 with UVO, the model's parameters converge to fit UVO, which degrades validation performance on YTVIS-19. When jointly training YTVIS-19 and UVO with OVIS, the imbalance is alleviated, resulting in a notable performance improvement.
It’s worth noting that due to the scale of VIS datasets, especially YTVIS-19, the final results usually fluctuate by approximately 0.3 AP, and this may also be the direct reason for the apparent drop of our TMT-VIS relative to Mask2Former-VIS, since in other training settings TMT-VIS significantly outperforms Mask2Former-VIS. #### Reference [1] Yan B, Jiang Y, Wu J, et al. Universal instance perception as object discovery and retrieval[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 15325-15336. Pdf: /pdf/ee3757906006968fba953c6a7260b21b8bce8109.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors propose a multi-dataset joint training method for the video instance segmentation task. They build on the MaskFormer-VIS model, introducing two additional modules: the Taxonomy Compilation Module and the Taxonomy Injection Module. And the proposed method shows compelling performance. Strengths: S1. This paper tackles an overlooked problem: multi-dataset joint training for the video instance segmentation task. Weaknesses: W1. A significant concern I have is with the explicit relevance and motivation of the proposed methodologies for the task of "video" instance segmentation. The paper currently lacks clear justification as to why these particular methods are designed and show impressive performance specifically for this task. It would benefit from a more in-depth exploration and justification of why these methodologies are uniquely suited for "video" instance segmentation. W2. The paper needs an overall improvement in writing quality. The related works section contains several inaccuracies that need rectification. To specify: - L83: The claim that Mask R-CNN [13] and MinVIS [18] use a tracking branch is incorrect. - L89: It is incorrect to label [47, 28] as MOTS works; they are, in fact, MOT methods. - L90: The assertion that IDOL [39] is based on GenVIS [15] is inaccurate. Additionally, there's a misleading terminology used for the proposed module on L296; it should be the Taxonomy Compilation Module (TCM), not the Taxonomy Extraction Module (TEM). W3. The paper seems to possess marginal novelty. In particular, as stated in L116-117, the proposed method leverages taxonomic embedding (as I understand, these are embeddings from the VLM), rather than language embeddings. The distinction of this work compared to previous multi-dataset joint training methods isn't clear. It would be advantageous if the authors could provide a more explicit elaboration on the unique novelty of their work. 
Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: Major concerns are raised in the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 1 poor Contribution: 1 poor Limitations: This paper has discussed its limitations and societal impact in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we want to thank you for the detailed, insightful and constructive comments. ### Uniqueness for "video" instance segmentation. When injecting the taxonomic embeddings, we add a spatio-temporal adapter to generate video-specific modulated taxonomic embeddings. This approach is not only parameter-efficient but also adapts the output text embeddings from the text encoder to fit multi-frame scenarios. Thus, the modulated taxonomic embeddings are able to interact with the frame features across the temporal dimension through cross-attention and FFN operations. Since the difficulty of VIS tasks lies in segmenting while modeling trajectories, which grows polynomially with the number of video frames, gradual refinement using only queries from the transformer decoder can be troublesome and ineffective. On the other hand, injecting modulated taxonomic embeddings as guidance can provide a prior for transformer queries, and because the modulated embeddings are filtered to carry the semantic information of the most likely categories, such injection can help queries converge to the desired instances faster and model instances' trajectories more precisely. When we downsample the YTVIS video clips to only 1 frame per video, video instance segmentation reduces to image instance segmentation. We notice that our design provides less improvement in this image-level setting than at higher frame rates. The results are reasonable since the spatio-temporal adapter reduces to providing only spatial adjustment. With the backbone trained on COCO images, the strong segmentation backbone provides great image-level segmentation capacity, making the final results higher than when training on the original YTVIS-19. This further illustrates that our design is tailored specifically to "video" instance segmentation. Table 1-1. Experiments on different frames per video.
| Method | Testing Dataset | Frames Per Video | AP | | ----------- | :-------------: | :--------------: | ---- | | Mask2Former | YTVIS-19 | 1 | 49.2 | | TMT-VIS | YTVIS-19 | 1 | 53.5 | | Mask2Former | YTVIS-19 | ~30 | 46.4 | | TMT-VIS | YTVIS-19 | ~30 | 52.6 | ### Overall writing Thanks for your careful suggestions about the misplaced citations in related works as well as the terminology of the 'Taxonomy Compilation Module'. We will revise the whole paper thoroughly and with more care. To clarify some of the mistakes in the submitted version: MaskTrack R-CNN is the baseline online VIS method, built by adding a simple tracking branch to Mask R-CNN. MinVIS applies a strong query-based image instance segmentation model (Mask2Former) to individual frames and associates query embeddings by bipartite matching. Based on Deformable-DETR, IDOL introduces a contrastive learning head that acquires discriminative instance embeddings for association. As for the MOTS methods mentioned in related works, the baseline method is TrackR-CNN, which extends Mask R-CNN with three-dimensional convolutions to combine contextual information and deploys an association head to extract instance embeddings for data association. Another important SOTA method is PointTrack, which follows the tracking-by-instance-segmentation paradigm: it first obtains high-quality instance segmentation results with spatial embeddings, and then extracts instance features from the segmentation results. As for the misuse of terminology, we again deeply apologize for the carelessness.
Therefore, increasing the data scale and enriching the label space while improving classification precision becomes a major challenge for researchers. This phenomenon is shown in our ablation study. UNINEXT[1] is the first DETR-based method that jointly trains multiple VIS datasets. It simply uses a BERT language encoder to generate language embeddings of categories from all video datasets, and fuses this information with visual embeddings through a simple bi-directional cross-attention module. However, UNINEXT has no video-specific design and does not use the language embeddings to predict a set of possible categories, so the semantic information of VIS categories is simply aggregated without further processing. Our method, on the other hand, adjusts the language embeddings via a taxonomy compilation module, which consists of a spatio-temporal adapter and an FFN, as well as a taxonomy injection module, which fuses the taxonomy information into video queries through multi-level cross-attention and self-attention layers. In the taxonomy compilation module, taxonomy embeddings are designed to interact with video features in order to unearth the potential taxonomy contained in each of the video frames. After this, we calculate the dot product between the different modulated taxonomic embeddings to predict the most relevant taxonomy in the given video. By compiling and injecting the possible taxonomic information into queries as guidance, queries converge to the desired instances faster and finally achieve better precision. ### Reference [1] Yan B, Jiang Y, Wu J, et al. Universal instance perception as object discovery and retrieval[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 15325-15336. --- Rebuttal Comment 1.1: Comment: Dear Reviewer 5w4z, We are sincerely grateful to you for the precious time and selfless efforts you have devoted to reviewing our paper.
Since the deadline of discussion is approaching, we would like to inquire whether our response has addressed your concerns and if you have the time to provide further feedback on our rebuttal. We are more than willing to engage in further discussion. Best regards, Authors of paper 11160 --- Rebuttal Comment 1.2: Title: Response Required Comment: Dear Reviewer 5w4z, Given you are in the minority with a reject it is critical that you read the rebuttal and inquire about any outstanding issues you feel are still not addressed. Thank you, AC
null
null
null
null
null
null
OpenMask3D: Open-Vocabulary 3D Instance Segmentation
Accept (poster)
Summary: This paper focuses on the task of open-vocabulary 3D instance segmentation, which involves predicting 3D object instance masks and their corresponding categories. The authors highlight the limitations of traditional closed-vocabulary approaches that operate within a predefined set of object categories, which restricts their ability to handle novel objects and free-form queries. To address these limitations, the authors propose OpenMask3D, a zero-shot approach for open-vocabulary 3D instance segmentation. OpenMask3D utilizes predicted class-agnostic 3D instance masks and performs mask-feature aggregation using CLIP-based image embeddings. This allows the model to reason beyond pre-defined concepts and handle open-vocabulary queries. This paper demonstrates its superiority over closed-vocabulary counterparts. The proposed method has the potential to enhance various applications such as robotics, augmented reality, scene understanding, and 3D visual search. Strengths: The authors propose OpenMask3D as the first zero-shot approach for open-world 3D instance segmentation, offering a unique solution that leverages predicted class-agnostic 3D instance masks and CLIP-based image embeddings. This combination of ideas and techniques demonstrates originality in problem formulation, model architecture, and feature aggregation for open-vocabulary 3D instance segmentation. The methodology is presented in a clear and structured manner, allowing readers to understand the proposed approach easily. The experimental evaluation is conducted on the common benchmark datasets, and the authors perform ablation studies to gain insights into the design choices of the model. The results demonstrate that OpenMask3D outperforms other open-vocabulary counterparts, particularly in scenarios with a long-tail distribution of objects, which advance the state-of-the-art 3D instance segmentation in the complex real world. 
Weaknesses: While the paper demonstrates several strengths, there are also some areas that could be improved to further enhance its contributions and impact. Evaluation on a Single Dataset: The paper conducts experiments and ablation studies on the ScanNet200 dataset, which may limit the generalizability of the findings. It would be beneficial to evaluate the proposed OpenMask3D method on multiple datasets, including those with varying object distributions, to assess its robustness and performance across different scenarios. A more diverse evaluation would strengthen the claims made in the paper. Long Pipeline: Another weakness of this work is the long pipeline of the proposed OpenMask3D model. The pipeline consists of multiple stages, including a class-agnostic mask proposal head, mask-feature aggregation module, and multi-view fusion of CLIP-based image embeddings. The length of the pipeline may introduce computational complexity and potentially impact the efficiency and real-time applicability of the model. Lack of Computation Cost Comparison: While the paper discusses the architecture and methodology of OpenMask3D in detail, it does not provide a thorough comparison of the computational resources required and the speed achieved between different methods. Problematic Structure of Section 4.1: Sec. 4.1 is named "Closed-vocabulary 3D instance segmentation evaluation", while both closed-set instance segmentation results and open-vocabulary instance segmentation results are presented in this section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper primarily evaluates the proposed OpenMask3D method on the ScanNet200 dataset. Evaluation on more datasets with diverse scenes should be performed. It would be beneficial to include a comprehensive comparison of the computational resources and inference speed between OpenMask3D and existing closed-vocabulary and open-vocabulary 3D instance segmentation methods.
The structure of Experiment section should be better organized. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper briefly mentions limitations of OpenMask3D. It would be valuable to provide a more thorough analysis of the limitations, and discuss potential method / direction that could be explored to solve it. Potential negative societal impact is not applicable for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback, and aim to address the questions in the following: ### Q1: Evaluation on more datasets and settings The reviewer suggests evaluating on other datasets to show the generalizability of OpenMask3D. We agree and additionally run experiments on Replica, besides ScanNet200. For the OpenScene baselines, we use the pre-computed fused features from the OpenScene GitHub repository. Scores are shown in the table below. The results show that our approach generalizes well to new datasets: we obtain scores comparable to those on ScanNet200. | Method | ScanNet200 (mAP$_{50}$) | Replica (mAP$_{50}$) | | --- | --- | --- | | OpenScene (2D Fusion) | 15.2 | 15.6 | | OpenScene (3D Distill) | 6.2 | 10.5 | | OpenScene (2D/3D Ens.) | 6.7 | 10.4 | | OpenMask3D (Ours) | 16.8 | 18.4 | We additionally provide an experiment to analyze the generalization capability of our approach beyond the training categories. We use class-agnostic masks from a mask-predictor trained on the 20 original ScanNet classes, and evaluate on the ScanNet200 dataset. As we intend to assess how well our model performs on “unseen” categories, we categorize objects into two subsets: *base* and *novel* classes. We identify ScanNet200 categories that are semantically close to the original ScanNet20 classes (folded-chair, table, dining table, cabinet, kitchen cabinet, bathroom counter, etc.), resulting in 53 categories. Below, we report results on seen (“base”) classes, unseen (“novel”) classes, and all classes. The full table is provided in Tab. 1 of the rebuttal PDF. |Method|Novel (AP$_{50}$)|Base (AP$_{50}$)|All (AP$_{50}$)| |---|---|---|---| | OpenScene (2D Fusion) |10.3|**15.0**|11.6| | OpenScene (3D Distill) |2.3|13.4|5.3| | OpenScene (2D/3D Ens.) | 2.8| 13.7|5.8| | OpenMask3D (Ours) |**12.9**| **15.0**| **13.5**| ### Q2: Comparison of Computational Resources We compare the necessary computational resources of the original OpenScene from [46] and our OpenMask3D. We report the runtimes and GPU memory requirements. **1) Runtime** In the table below, we provide the runtime of our approach on ScanNet for generating the queryable 3D scene representation, and also for performing a query on that representation. For comparison, we additionally provide the runtimes of the original OpenScene [46]. All runtimes are measured on the same hardware. | Method | Generating 3D Scene Representation [s] | Querying 3D Scene Representation [ms] | Semantics? | Instances? | | --- | --- | --- | --- | --- | | OpenScene (2D Fusion) | 440 | 168.8 | $\checkmark$| ✘ | | OpenScene (2D/3D Ensemble) | 524 | 168.8 | $\checkmark$| ✘ | | OpenMask3D | 556 | **0.923** | $\checkmark$| $\checkmark$ | | OpenMask3D (fast) | **350** | **0.923** | $\checkmark$| $\checkmark$ | We also report the runtime of a faster variation of our model (called ‘fast’), which runs CLIP only once per segment on a single crop and uses a smaller SAM backbone. The exact hyper-parameters are shown in the table below. This faster version performs almost as well as the original model (small performance drop of -1.3 mAP$_{50}$). | Model | top k | $\mathsf{L}$ | SAM backbone | | --- | --- | --- | --- | | OpenMask3D | 5 | 3 | ViT-H| | OpenMask3D (fast) | 1 | 1 | ViT-B| **2) Minimum GPU Memory Requirement to Run the Model** In the table below we indicate the minimum GPU memory requirements for OpenScene and OpenMask3D. OpenScene has much higher GPU memory requirements since it depends on OpenSeg.
| Method | GPU memory | main components | |---|---|---| | OpenScene | 32 GB | OpenSeg: 32GB, CLIP-text: 4GB, OpenScene: 32GB| | OpenMask3D | 10 GB | SAM: 8GB, CLIP-image: 4GB, CLIP-text: 4GB | | OpenMask3D-fast | 4 GB | SAM: 4GB, CLIP-image: 4GB, CLIP-text: 4GB| ### Q3: Structure of Section 4.1 The reviewer highlights the interleaved presentation of both open-set and closed-set results in this section. This is correct and we will separate both settings more clearly to improve the organization of the experiment section. ### Limitations We agree with the reviewer that our paper would benefit from discussing the limitations further, and we will extend our section (L320-323) to include a discussion on class-agnostic mask quality as also discussed in (L294-308). Another limitation we would like to mention is that the current top-k view selection algorithm relies directly on the visibility of an object instance in each frame, and could benefit from additional criteria assessing the *quality* of a frame so that more informative frames are selected. We have observed that frame selection plays an important role, particularly for preventing scene context from being overly infused into our per-mask features. Lastly, we also agree that a systematic approach is needed to evaluate open-vocabulary performance quantitatively, and call for future work in this direction. --- Rebuttal Comment 1.1: Comment: The response solves my concern, I keep my rating of 5.
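The per-mask querying step benchmarked in Q2 of the rebuttal above (scoring each instance mask's aggregated CLIP image feature against the embedding of a text query) amounts to a cosine-similarity ranking; the following is a minimal numpy sketch with hypothetical names and shapes, not the actual OpenMask3D implementation:

```python
import numpy as np

def query_masks(mask_feats, text_emb):
    """Rank 3D instance masks against a text query by cosine similarity
    between each mask's aggregated CLIP image feature and the CLIP text
    embedding (shapes are illustrative assumptions).

    mask_feats: (M, D) one aggregated feature per instance mask
    text_emb:   (D,)   embedding of the text query
    """
    m = mask_feats / np.linalg.norm(mask_feats, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    sims = m @ t                      # (M,) cosine similarity per mask
    return np.argsort(-sims)          # mask indices ranked by relevance

# toy example: 3 masks with 2-dim features, queried with one text embedding
feats = np.array([[1., 0.], [0., 1.], [1., 1.]])
query = np.array([0., 2.])
order = query_masks(feats, query)
# order -> [1, 2, 0]: mask 1 aligns best with the query direction
```

Because the per-mask features are precomputed once per scene, each query reduces to a single matrix-vector product, which is consistent with the sub-millisecond query times reported above.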
Summary: The paper proposes to perform open-vocabulary instance segmentation by utilizing class-agnostic 3D instance segmentation masks from a 3D instance segmentation model trained on scannet200 and generating class labels for it using CLIP. The paper proposed to first obtain class-agnostic instance masks from a supervised Mask3D model (without using class annotations). Then it finds the images where the objects are best visible. It then projects the 3D segmentation masks in those views and refine them using SAM. Finally they average features for each object from multiple views and at multiple scale to arrive at one feature vector per 3D instance mask. The class label can then be obtained by doing dot-product with the obtained feature vector and the language embedding of the class. The results show that the proposed method is superior than prior point-based methods when also supplied with class-agnostic instance segmentation mask in closed-vocabulary setting. The paper also shows some qualitative results with free-flowing natural language. Strengths: - The paper is well written, I especially appreciate the helpful supplementary video and text content which made the nitty gritty details of the pipeline very clear. - Open-Vocabulary 3D instance segmentation is a very useful task which hasn't been tackled before -- this paper brings attention to it (that said, I have some concerns here as mentioned in weaknesses) - The proposed method obtains better results than prior point-based methods like OpenScene Weaknesses: - Open-Vocabulary: The proposed method relies on 3D instance segmentation predictions which in turn relies on 3D segmentation annotations. This, however, is not available for wide variety of objects, hence I am unsure if we can conclude that this model is indeed “open-vocabulary”. 
For example: If instead of scannet200, the proposed model uses class-agnostic masks from a model trained on 20 scannet classes, would it be able to achieve decent results on Scannet200? Would this model trained on scannet work on a different dataset like MatterPort3D? Additionally, since 3D datasets are significantly smaller than their 2D counterparts, doesn’t relying on 3D instance segmentation mask a serious bottleneck which wouldn’t scale? The point-based models are open-vocabulary in the sense that they are not bottlenecked by any 3D-specific annotations — at the same time I do agree with the point of this paper that they can only do semantic and not instance segmentation. However, using instance masks via 3D annotations might not as well lead to “open-vocabulary instance segmentation” proposed in this paper. Ideally, it should primarily have results on held out 3D categories that the model or any of its components have never seen during training. - In continuation of the above, the results for 3D instance segmentation in open-vocabulary setting is only qualitative and not quantitative. I understand though that prior methods too show only qualitative open-vocabulary results, and so this is said as a minor point and not a major complaint. However, the lack of quantitative results on categories outside the training data is concerning — especially since this proposed model particularly used labels from scannet which may not generalize beyond the 200 categories they were trained on, and on out-of-domain 3D scenes. - Unfounded Claims: - L304: The paper highlights that when given access to oracle masks at “test time” to their model, it outperforms the supervised Mask3D model on tail AP by 9.1%. As the paper concludes in L305-308, this result indicates that if somehow we are able to obtain high quality class-agnostic masks, we do not need supervision for class labelling as their method can outperform supervised Mask3D. 
In my opinion, this is a misleading claim because while their baseline “Mask3D” has access to ground truth masks and classes during training, it does not have access to oracle masks during test time. This makes the comparison unfair. AP is very sensitive to quality of the segmentation mask and if supplied oracle masks to Mask3D it may do much better. A very crude way to supply that would be — computing Hungarian matching between the predicted masks and ground truth masks (without class labels), and for each predicted mask replace it with the matched ground truth mask while keeping the class label same. In general though, this comparison of supervised mask3d and the proposed method needs much more care to make balanced conclusions. - (Minor) L290-291: Claims that “the ablation study show that effect of 2D mask segmentation is less significant than the effect of multi-scale cropping”. Based on this, one might expect the row 2 of component analysis section to be (significantly) better than row 3. However, on some metrics row 2 wins while on others row 3 wins. And the difference is not that much. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - As mentioned in the limitations, here are some suggestions which might help alleviate the concerns over instance-segmentation annotation bottleneck: - Does class-agnostic segmentation generalize beyond training categories? Performance on OOD dataset like Matterport3D and maybe instance segmentation model trained on scannet 18 classes and tested on scannet 200 class could help (compared against point-based methods). Another suggestion could be to hold out rare categories from Scannet200 for training class-agnostic instance segmentation model and evaluate on the held out categories at test time. Ofcourse, these are just suggestions and any other experiment that could help us get to the bottom of this will be highly appreciated. 
- Answers to "unfounded claims" as described in limitations would be super helpful too. - Could you give some insights on why the performance of the OpenScene 2D Fusion model is so much better than the 2D-3D ensemble models, while the Openscene paper consistently showed better results with the ensemble version of their model? That was a bit strange to me. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper does not discuss limitations. I think the biggest one is their reliance on class-agnostic instance segmentation mask annotations which would be good to discuss in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the extensive feedback; we really appreciate the helpful suggestions for experimental setups! ### Open-vocabulary Evaluation & Generalization Beyond Training Categories The reviewer correctly highlights that Mask3D is trained on a closed-set segmentation mask dataset, and is concerned that the class-agnostic Mask3D would not generalize beyond the masks seen during training. This is indeed a valid concern, and deserves further analysis. However -- as our additional experiments below indicate -- in practice, the model still manages to generalize quite well beyond the object masks seen during training. To demonstrate the capability of our approach beyond the mask-predictor training categories, we conducted a series of experiments following the suggestions by the reviewer. **Generalization to unseen categories** First, we analyze how well our model would perform if we use class-agnostic masks from a mask-predictor trained on the 20 original ScanNet classes, and evaluate on the ScanNet200 dataset. To evaluate how well our model performs on “unseen” categories, we classify the ScanNet200 labels into two subsets: *base* and *novel* classes. We identify ScanNet200 categories that are semantically similar to the original ScanNet20 classes (e.g. chair and folded-chair, table and dining-table, etc.), resulting in 53 classes. We group all remaining object classes that are not similar to any class in ScanNet20 as "novel". Below, we report results on seen (“base”) classes, unseen (“novel”) classes, and all classes. The full table is provided in Tab. 1 of the rebuttal PDF. |Method|Novel (AP$_{50}$)|Base (AP$_{50}$)|All (AP$_{50}$)| |---|---|---|---| | OpenScene (2D Fusion) |10.3|**15.0**|11.6| | OpenScene (3D Distill) |2.3|13.4|5.3| | OpenScene (2D/3D Ens.) | 2.8| 13.7|5.8| | OpenMask3D (Ours) |**12.9**| **15.0**| **13.5**| Our experiments show that the model trained on a smaller set of object annotations from ScanNet20 can generalize to predict object masks for a significantly larger set of objects (ScanNet200), resulting in only a marginal decrease in performance. In particular, we see that OpenMask3D, compared to other open-vocabulary counterparts, seems to better preserve information about uncommon objects. **Generalization to OOD data** Furthermore, we show results on out-of-distribution data from *Replica*, using a mask predictor trained on ScanNet. For the OpenScene baselines, we use the pre-computed features from the OpenScene repository. The Replica dataset contains high-quality *mesh* reconstructions of indoor scenes, and RGB-D images rendered from these meshes. In order to assess the robustness of our CLIP-based mask-feature module to image quality and realism, we conduct a second experiment where we render RGB-D images from the *point clouds* of Replica scenes (marked “rendered RGB-D” below, illustrated in Fig. 1.b, rebuttal PDF). |Method|AP|AP$_{50}$|AP$_{25}$| |---|---|---|---| |OpenScene (2D Fusion)|10.9|15.6|17.3| |OpenScene (3D Distill) |8.2|10.5|12.6| |OpenScene (2D/3D Ens.)|8.2|10.4|13.3| |OpenMask3D (rendered RGB-D) |11.6|14.9|18.4| | OpenMask3D |**13.1**|**18.4**|**24.2**| Our results demonstrate that OpenMask3D can indeed generalize to unseen categories as well as OOD data. Nevertheless, we understand and agree with the reviewer’s concern in general; however, the above experiments seem to indicate that this is less of a problem than one might initially think. In particular, the mask predictor module trained on a smaller set of objects seems to perform reasonably well in various settings. Furthermore, several qualitative examples we provide in our submission show that our method can achieve good “open-vocabulary” results for objects that were not originally annotated, e.g.
Fig.1 "angel", Fig.4 "pepsi" - and more in the supplementary. ### Oracle Mask Experiment The question is about an experiment from the paper (Tab. 3) showing that OpenMask3D, when given access to oracle masks, has the potential to outperform the supervised Mask3D model on tail categories by 9.1% AP. The reviewer raised a concern about the fairness of our experiment, stating that Mask3D does not have access to oracle masks at test time, unlike the open-vocabulary approaches presented in Tab. 3. We understand the concern, and we regret our unintentionally strong wording regarding the claim in L304-308. To address this, we conduct the suggested experiment in which we supply oracle masks to Mask3D. We perform Hungarian matching between the predicted masks and oracle masks, discarding all class losses and matching only based on the masks. For each oracle mask, we assign the class label of the matched mask from Mask3D. Our results are below: |Method|AP|head (AP)|common (AP)|tail (AP)| |---|---|---|---|---| |Mask3D (Hungarian M., oracle)|35.5|55.2|27.2|22.2| |OpenMask3D (oracle)|23.4|24.6|19.3|27.0| Even when we supply Mask3D with oracle masks, our approach surpasses the Mask3D performance on the tail categories by $+4.8$ AP. While these findings in fact confirm our initial claim, we will revise our text to make more careful conclusions. ### Performance OpenScene 2D Fusion vs. 2D/3D Ensemble The reviewer points out that OpenScene 2D performs better than the 2D/3D ensemble variant, which is not in line with the findings in OpenScene [46]. We also noticed this and show visualizations in Fig. 1.c of the rebuttal PDF: since the ensemble is a point-wise operator, it seems to add inconsistent noise, which might be the reason for the reduced performance. ### Limitations We agree with the reviewer that our paper would benefit from further discussing limitations, and we will extend our section (L320-323) to include a discussion on class-agnostic mask quality as also discussed in (L294-308).
Although our findings above demonstrate that the 3D annotation bottleneck might not be as severe as initially thought, we understand that it still plays an important role and will be added to the limitations. --- Rebuttal Comment 1.1: Comment: Thank you so much for all your hard work and a thorough response! Generalization Experiments: I really appreciate these experiments and they resolve my concerns. For the ScanNet200 experiment, I am not sure though that this claim is well-supported: "Particularly, we see that OpenMask3D, compared to other open-vocabulary counterparts - seem to better preserve information about uncommon objects." since I think both OpenMask3D and OpenScene (2D fusion) see a similar drop, i.e., around 3.3-3.6 in terms of mAP50 (All). Additionally, it might be useful to also add a row to Table-1 of the rebuttal with a version where you DO have access to mask labels in ScanNet200 during training (the results you report in Table-1 of the main paper). I think that would more easily illustrate the performance loss without access to ground truth mask labels. Thank you for fixing the oracle mask experiment! Overall, I would like to thank the authors for their effort; all my concerns are addressed. After seeing the rebuttal, I am leaning towards accept. I will update my ratings after the discussion with other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback, we are really happy to hear that we were able to address your concerns! Generalization Experiments: We are glad these experiments resolved the concerns. Regarding the claim "OpenMask3D, compared to other open-vocabulary counterparts - seem to better preserve information about uncommon objects.", we would like to clarify that what we originally meant was that OpenMask3D seems to perform better compared to other open-vocabulary counterparts in *this evaluation setup*, as it achieved generally higher scores. 
However, we also agree that both methods see a similar drop in terms of AP50, and we will rephrase this statement to limit its scope to what we observe in this particular table. Furthermore, we agree that it would be helpful to extend Table 1 of the rebuttal with a row describing the results we reported in Table 1 of the main paper, to better illustrate the comparison between different experimental setups. Once again, thank you for your quick response and additional feedback! In the remaining discussion period, we would be very glad to address any additional questions that may arise, or to provide any clarifications needed.
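The mask-only Hungarian matching used in the oracle experiment above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name and the (masks × points) boolean layout are our assumptions. Matching maximizes total mask IoU with no class losses involved, and each oracle mask then inherits the class label of its matched prediction.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_masks_to_oracle(pred_masks, oracle_masks, pred_labels):
    """pred_masks: (P, N) bool, oracle_masks: (O, N) bool, pred_labels: (P,).
    Returns {oracle_mask_id: class label of the IoU-matched predicted mask}."""
    inter = oracle_masks.astype(float) @ pred_masks.astype(float).T  # (O, P)
    union = (oracle_masks.sum(1, keepdims=True)
             + pred_masks.sum(1, keepdims=True).T - inter)
    iou = inter / np.maximum(union, 1.0)
    # Hungarian matching on mask IoU only -- no class scores are used.
    rows, cols = linear_sum_assignment(-iou)
    return {int(o): int(pred_labels[p]) for o, p in zip(rows, cols)}
```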
Summary: This paper presents a method for 3D open-vocabulary instance segmentation. It proposes to use a class-agnostic Mask3D to get instance mask proposals, project the instance points to 2D views to get 2D segments, and extract instance features based on the 2D segments with the CLIP image encoder. Then, text descriptions can be used as queries to find the best instance proposals in an open-vocabulary way by comparing the similarities with the instance mask features. Strengths: 1. The paper is well organized with good figures and easy to understand. 2. The paper presents the first method for open-vocabulary instance segmentation in 3D scenes. 3. The experiments and analysis in the paper show the great zero-shot ability of the method. Weaknesses: 1. The method relies on RGB-D images, and thus cannot generalize to 3D scenes (e.g., 3D scenes collected by LiDAR) without 2D images. 2. The experiments are conducted only on the ScanNet dataset. Experiments on more datasets like the Matterport3D used in OpenScene are expected to prove the generalizability of the method. 3. The inference speed of the whole framework seems slow, as the method uses some heavy models (e.g., SAM and CLIP) for multiple views. The SAM model is even run for multiple iterations for each mask. A comparison of the inference time between this method and OpenScene is expected. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the weaknesses. More questions are listed below: 1. More details of training a class-agnostic Mask3D are expected, for example, the scores used in the Hungarian matching process and the losses used in training. 2. The paper will be better if there is a discussion of the domain gap between the cropped image and the images used to train CLIP. Also, considering the domain gap, a detailed ablation of the hyperparameters in multi-scale cropping is expected. 3. The equations in L212 seem inaccurate. 
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Some of the limitations are discussed in the last paragraph of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We are happy that the reviewer found our paper well-organized and easy to understand, and appreciated the zero-shot ability of our open-vocabulary 3D instance segmentation approach. Below, we hope to answer the questions raised by the reviewer. ### W1/W2: Generalization to other datasets & 3D scenes without 2D images To assess the generalization ability of our method to other datasets, we share additional results on the Replica dataset. For the OpenScene baselines, we use the pre-computed features from the OpenScene repository. We provide qualitative examples of open-vocabulary queries from our method in Fig. 1.d of the rebuttal PDF. Our approach outperforms other open-vocabulary counterparts on the Replica dataset as shown below: |Method|AP|AP$_{50}$|AP$_{25}$| |---|---|---|---| |OpenScene (2D Fusion)|10.9|15.6|17.3| |OpenScene (3D Distill) |8.2|10.5|12.6| |OpenScene (2D/3D Ens.)|8.2|10.4|13.3| | OpenMask3D |**13.1**|**18.4**|**24.2**| Furthermore, the reviewer highlights that our approach requires images as input. This is correct, and it is due to the fact that OpenMask3D, like OpenScene, depends on vision-language models that operate on images in combination with text. In this work, we prioritized the ability to recognize uncommon/long-tail objects over generalization across different modalities. Using vision-language models on images provides an excellent opportunity to preserve this generalization capability. Nevertheless, we would like to state that when only a 3D scan of a scene is available, it could still be possible to render images from the 3D scan. We tried this on the Replica dataset, and rendered RGB-D images from the scene point clouds, which is illustrated in Fig. 1.b of the rebuttal PDF. With this approach, we obtained the scores shown in the table below, marked "rendered RGB-D". 
|Method|AP|AP$_{50}$|AP$_{25}$| |---|---|---|---| |OpenMask3D (rendered RGB-D) |11.6|14.9|18.4| | OpenMask3D |**13.1**|**18.4**|**24.2**| Overall, the performance does decrease when using images rendered from the point cloud, but only by 1.5 AP. Yet, we agree that when no color images are available or when the scan is sparse (LiDAR), it might not be possible to easily render color images. ### W3: Runtime In the table below, we provide the runtime of our approach on ScanNet for generating the queryable 3D scene representation, and also for performing a query on that representation. For comparison, we additionally provide the runtimes of the original OpenScene [46]. All runtimes are measured on the same hardware. | Method | Generating 3D Scene Representation [s] | Querying 3D Scene Representation [ms] | Semantics? | Instances? | | --- | --- | --- | --- | --- | | OpenScene (2D Fusion) | 440 | 168.8 | $\checkmark$| ✘ | | OpenScene (2D/3D Ensemble) | 524 | 168.8 | $\checkmark$| ✘ | | OpenMask3D | 556 | **0.923** | $\checkmark$| $\checkmark$ | | OpenMask3D (fast) | **350** | **0.923** | $\checkmark$| $\checkmark$ | We also report the runtime of a faster variation of our model (called ‘fast’) which runs CLIP only once per segment on a single crop and uses a smaller SAM backbone. The exact hyper-parameters are shown in the table below. This faster version performs almost as well as the original model (small performance drop of -1.2 AP). | Model | top k | $\mathsf{L}$ | SAM backbone | | --- | --- | --- | --- | | OpenMask3D | 5 | 3 | ViT-H| | OpenMask3D (fast) | 1 | 1 | ViT-B| |Method|AP|AP$_{50}$|AP$_{25}$| |---|---|---|---| |OpenMask3D (fast) |11.9|17.1|23.3| | OpenMask3D |**13.1**|**18.4**|**24.2**| ### Q1: Class-Agnostic Mask3D training details Class-Agnostic Mask3D closely follows Mask3D but ignores the semantics - the main difference is that we discard the class labels and the class confidence-based filtering stage. In the supplementary material Sec. 
1.1 we provide a detailed explanation. We will extend the description in the main text to include further implementation details. ### Q2: Discussion on the domain gap & ablation on multi-scale cropping parameters We agree with the reviewer, and provide an ablation study regarding the hyperparameters used during the multi-scale cropping phase on the Replica dataset. In the table below, we ablate results based on both the number of levels and the ratio of expansion between levels: | Levels | Ratio of Expansion|AP | AP50| AP25| | --- | --- | --- |--- |--- | | 1 | 0.1 |11.3 |16.0|20.2| | 3| 0.1 | **13.1**|**18.4**|**24.2**| | 5| 0.1 |12.8|17.6|22.6| | 3| 0.05| 12.9 |18.1 |23.5 | | 3| 0.1 |**13.1**|**18.4**|**24.2**| | 3| 0.2| 12.8 | 17.7 | 22.9 | Regarding the domain gap w.r.t. CLIP, we also would like to highlight our "rendered RGB-D" experiment on Replica discussed earlier, which we believe highlights the robustness of our CLIP-based mask-feature module to image quality and realism. ### Q3: Equations in L212 The reviewer correctly noticed a problem with the equations; we updated them as follows: - $x_1^l = \max(0, x_1^1-(x_2^1-x_1^1)\cdot k_{exp} \cdot l)$ - $y_1^l = \max(0, y_1^1-(y_2^1-y_1^1)\cdot k_{exp}\cdot l)$ - $x_2^l = \min(x_2^1+(x_2^1-x_1^1)\cdot k_{exp}\cdot l, W-1)$ - $y_2^l = \min(y_2^1+(y_2^1-y_1^1)\cdot k_{exp}\cdot l, H-1)$ --- Rebuttal Comment 1.1: Comment: Dear reviewer cqVz, As the discussion phase ends today, we will not be able to further clarify potential additional concerns. We would be very grateful if you could respond to our rebuttal and offer us an opportunity to further engage with your concerns and address any additional questions you might have! Thank you for your time and feedback! Best, Authors
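The crop-expansion equations given in the Q3 answer above translate directly into code. A minimal sketch (variable names mirror the equations; we assume the level index $l$ starts at 0, so that level 0 reproduces the initial crop, which is our reading of the notation):

```python
def multi_scale_crop(x1, y1, x2, y2, k_exp, l, W, H):
    """Expand the initial box (x1, y1, x2, y2) by k_exp * l of its
    width/height on each side, clipped to the image bounds (W, H)."""
    w, h = x2 - x1, y2 - y1
    x1_l = max(0, x1 - w * k_exp * l)
    y1_l = max(0, y1 - h * k_exp * l)
    x2_l = min(x2 + w * k_exp * l, W - 1)
    y2_l = min(y2 + h * k_exp * l, H - 1)
    return x1_l, y1_l, x2_l, y2_l
```

With the default setting from the ablation (3 levels, expansion ratio 0.1), each mask would thus yield three progressively larger crops around its 2D bounding box.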
Summary: This paper addresses the problem of text-based 3D instance segmentation. To tackle this problem, the authors propose OpenMask3D, a zero-shot approach for 3D instance segmentation that utilizes class-agnostic 3D instance masks and multi-view fusion of CLIP-based image embeddings to aggregate per-mask features. The model's performance is evaluated through experiments and ablation studies on the ScanNet200 dataset, where it outperforms OpenScene in some metrics. Strengths: **Clarity and quality**: The paper is well written and explains all the components with clarity and in detail. The figures are excellent. The results are very neatly presented and overall the paper is engaging to read. **Empirical Evaluation**: The paper includes extensive empirical evaluations on the ScanNet200 dataset. The paper not only demonstrates the performance of OpenMask3D but also presents ablation studies on the number of frames used, the use of 2D masks, and multi-scale crops to understand the contribution of these components. **Relevance**: The work addresses a highly relevant challenge in computer vision and autonomous systems. As these technologies become increasingly prevalent, the ability to understand and interact with a diverse range of objects becomes critically important, underscoring the relevance of this research. Weaknesses: **Novelty of the task**: The paper makes strong claims about being the first to introduce "open-vocabulary 3D instance segmentation". However, given OpenScene and others, I don't think it can be claimed that the problem of text-based 3D instance segmentation is novel. OpenScene might not have used the word "instance", but it does show results for "Open-vocabulary 3D object search", which is similar in meaning. 
**Significance of the Contribution**: It is true that most existing segmentation approaches have relied on a closed-set of objects in 3D annotated datasets and thus a system that can perform open-vocabulary segmentation would be quite a significant contribution to the field of 3D scene understanding. However, the technical contribution of the proposed approach is somewhat limited considering its similarity to OpenScene. The main differences seem to be the frame selection and feature aggregation strategy, but if this is the case I would expect these components to be more prominently handled in the paper (rather than claiming a whole novel task). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Please clearly outline the differences compared to OpenScene - The examples given in the paper are mainly for common objects "armchair", "seat", "pool", "sofa". Can you give an example where the method identifies objects that are truly from an "open" vocabulary (i.e. not common)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable feedback, and we are really glad that the reviewer found our paper enjoyable to read! We address the questions and concerns in the following: ### Q1: Differences between OpenMask3D (instance segmentation) and OpenScene (semantic segmentation) The reviewer asks about the differences between OpenScene and this work; in particular, the reviewer suggests that OpenScene already solves the task of "open-vocabulary 3D instance segmentation". We disagree: OpenScene mainly addresses *semantic* segmentation (i.e., it predicts per-point features), while OpenMask3D addresses *instance* segmentation (i.e., it predicts a set of object masks and associated features). Both tasks are fundamental computer vision problems that are well defined and conceptually different. "Open-vocabulary 3D object search" in OpenScene primarily identifies a single point in the scene that best matches the query. "Image-based 3D object detection" in OpenScene identifies the set of points that have close similarity to the given image-based query (e.g., a chair image); however, it cannot differentiate between multiple instances of the same object class (e.g., it cannot predict a segmentation mask for Chair 1 and a different mask for Chair 2). In the light of this, we would like to clearly outline the main conceptual differences between OpenMask3D and OpenScene. OpenMask3D is *mask-based*, while OpenScene is *point-based*. Specifically, OpenScene outputs a per-point feature representation (num_points, feature_dim). OpenMask3D outputs a set of binary instance masks (num_masks, num_points), and corresponding per-mask features (num_masks, feature_dim). We would like to draw the reviewer’s attention to Fig. 1.a and Fig. 1.c in the rebuttal PDF, and Fig. 7 in the supplementary material, which we believe are helpful for visualizing our following statements. 
Both OpenMask3D and OpenScene output task-agnostic open-vocabulary features, but our method is tailored towards identifying object *instances*. While OpenScene returns heatmaps over the scene points describing similarity to the query, it cannot differentiate between two object instances that belong to the same query. In practice, it can either retrieve a set of points from the point cloud by using a given similarity score threshold (Sec. 5 of the OpenScene paper, “Image-based 3D object detection”) or can identify a single point that best matches the query (Sec. 5 of the OpenScene paper, “Open-vocabulary 3D object search”). In contrast, OpenMask3D is able to segment the *object instances* that suit the given open-vocabulary query, automatically separating the multiple instances from each other. Suppose now we want to find the top-k objects which match an open-vocabulary query. Although, as the reviewer correctly underlined, OpenScene is able to perform this task for $k=1$ (Sec. 5 of the OpenScene paper, "Open-vocabulary 3D object search"), it cannot retrieve multiple object matches when $k>1$, because it cannot distinguish whether a point belongs to a specific object or to another one. Suppose finally we want to find how many objects are present in a room such that the similarity with a given query is higher than a given threshold. In this case, OpenScene has no direct way to provide such an answer, as it can only return per-point features/labels. In brief, OpenScene and many recent 3D open-vocabulary approaches need subsequent steps to identify and separate object instances, which poses many practical limitations. As the reviewer also highlighted, “the ability to understand and interact with a diverse range of objects becomes critically important” - particularly when coupled with the necessity to densely segment separate objects. 
The novelty in our work lies in its ability to directly identify object instances in an open-vocabulary setting, and its design that is tailored to efficiently focus on *instances*. **OpenScene uses OpenSeg, OpenMask3D uses CLIP** This is another fundamental difference between both methods: to obtain pixel-aligned CLIP features, OpenScene relies on OpenSeg, which is a version fine-tuned from CLIP. However, during fine-tuning, it becomes less "general" than the original CLIP (used by OpenMask3D). This effect is also shown by the results in Table 1 of the main paper, particularly in long-tail categories. ### Q2: Examples from an “open” vocabulary The reviewer asks whether we could share examples where the method identifies objects that are truly from an “open” vocabulary. Our method is indeed capable of segmenting uncommon object instances, and we would like to draw attention to several figures from our submission. In Fig. 1 (main paper), we share examples using the queries “footrest” and “angel”. In Fig. 4 of the main paper, we share additional “uncommon” object queries, such as “pepsi”. We also would like to highlight sup. mat. Fig. 8, in which *all* examples are uncommon categories. This figure highlights examples such as “Cetaphil” (a soap brand), “Roomba” (a robot vacuum cleaner brand) and “dollhouse”. We apologize if we have not referenced the figures sufficiently within the text to highlight particularly novel object queries. --- Rebuttal Comment 1.1: Comment: Dear reviewer zuiw, As the discussion phase ends today, we will not be able to further clarify potential additional concerns. We would be very grateful if you could respond to our rebuttal and offer us an opportunity to further engage with your concerns and address any additional questions you might have! Thank you for your time and feedback! Best, Authors
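To make the mask-based vs. point-based distinction above concrete, here is a hypothetical sketch (names and shapes are ours, not the authors' API) of how per-mask features support top-k instance retrieval: each of the M masks carries one feature vector, so ranking masks by cosine similarity to a text query embedding directly returns k separate instances.

```python
import numpy as np

def query_top_k(mask_features, text_embedding, k):
    """mask_features: (M, D) per-mask features; text_embedding: (D,).
    Returns indices of the k instance masks most similar to the query."""
    f = mask_features / np.linalg.norm(mask_features, axis=1, keepdims=True)
    t = text_embedding / np.linalg.norm(text_embedding)
    scores = f @ t  # cosine similarity, one score per instance mask
    return np.argsort(-scores)[:k]
```

A point-based representation would instead return a per-point heatmap, leaving the separation into instances to a further post-processing step.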
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable feedback; we appreciate their detailed suggestions. We reply to each reviewer’s questions and concerns in the individual responses, and we have added tables and figures in the attached rebuttal PDF, which we reference and explain in the responses. Here, we also would like to provide an overview of the material in the attached document: **Table 1.** We provide 3D instance segmentation results using masks from the mask module trained on ScanNet, evaluated on the ScanNet200 dataset. We identify classes (such as chair, folded chair, table, dining table ...) that are semantically close to the original ScanNet classes, and group them as “Base”. Remaining classes are grouped as “Novel”. We also report results on the full set of labels. **Table 2.** We provide additional experiments on the **Replica** dataset. We further analyze different training setups quantitatively. **Table 3.** We provide an ablation study on the hyperparameters related to the multi-scale cropping stage of our approach. **Table 4 & Table 5.** We provide an overview of the memory requirements of foundation models used in OpenMask3D and OpenScene, and the time requirements for atomic operations. **Figure 1.a & Figure 1.c.** Qualitative comparison of per-mask (OpenMask3D) and per-point (OpenScene) features. **Figure 1.b.** Illustration of our additional experiment on the Replica dataset, in which we render RGB-D images from the point clouds, and use these images as an input to our pipeline. **Figure 1.d.** Qualitative results from open-vocabulary queries on the Replica dataset, using our OpenMask3D approach. Pdf: /pdf/de1022e1fccc80ba6c6cff48d49e3650a96a79e7.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes to solve open-vocabulary 3D instance segmentation. It uses a class-agnostic 3D instance segmentation model to obtain instance masks, then gathers multi-scale image features from multiple frames by CLIP and SAM to do the open-vocabulary classification task. Strengths: 1. The paper proposes to use instance-level features for 3D open-vocabulary instance segmentation, which is not attempted by previous works. 2. The proposed module exhibits reasonable improvements, as shown in Tables 2&3. Weaknesses: 1. The design of the framework may be complicated for real-world usage. It uses SAM to do segmentation for multiple frames, and then uses multi-scale images for CLIP inference. Each component like SAM and CLIP is a large foundation model and takes a while to run inference, not to mention that they are used multiple times. 2. The idea is not so novel. Although the idea of using instance mask features has not been attempted in 3D instance segmentation, it has been widely adopted in 2D open-vocabulary segmentation tasks, like OpenSeg[1], ODISE[2], ZegFormer[3]. 3. The experiments are not thorough. For example, the details of using SAM and RANSAC are not studied. [1] Scaling Open-Vocabulary Image Segmentation with Image-Level Labels, ECCV2022 [2] ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models, CVPR2023. [3] Decoupling Zero-Shot Semantic Segmentation. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What is the latency and inference cost of the model during inference? 2. How is $k_{rounds}$ chosen? Is there any ablation study about that? Furthermore, does it mean that SAM needs to run inference $k_{rounds}\times$ (number of frames) times, and CLIP needs to run inference $k_{rounds}\times$ (number of frames) $\times$ (number of scales) times? 3. How is the confidence score $s_r$ obtained? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. ### Q1: What is the latency and inference cost of the model during inference? In the table below, we provide the runtime of our approach on ScanNet for generating the queryable 3D scene representation, and also for performing a query on that representation. For comparison, we additionally provide the runtimes of the original OpenScene [46]. All runtimes are measured on the same hardware. | Method | Generating 3D Scene Representation [s] | Querying 3D Scene Representation [ms] | Semantics? | Instances? | | --- | --- | --- | --- | --- | | OpenScene (2D Fusion) | 440 | 168.8 | $\checkmark$| ✘ | | OpenScene (2D/3D Ensemble) | 524 | 168.8 | $\checkmark$| ✘ | | OpenMask3D | 556 | **0.923** | $\checkmark$| $\checkmark$ | | OpenMask3D (fast) | **350** | **0.923** | $\checkmark$| $\checkmark$ | We also report the runtime of a faster variation of our model (called "fast") which runs CLIP only once per segment on a single crop and uses a smaller SAM backbone. The exact hyper-parameters are shown in the table below. This faster version performs almost as well as the original model (small performance drop of -1.3 AP$_{50}$, full table is available as Tab. 2 of the rebuttal PDF). | Model | top-k | $\mathsf{L}$ | SAM backbone | | --- | --- | --- | --- | | OpenMask3D | 5 | 3 | ViT-H| | OpenMask3D (fast) | 1 | 1 | ViT-B| ### Q2.a: Value of $k\_{rounds}$ The reviewer asks how $k\_{rounds}$ (the number of SAM runs in RANSAC) is selected and whether there is an ablation study. We tried increasing values for $k\_{rounds}$ but saw only a marginal increase in performance, e.g., from 16.6 AP50 (k=1) to 16.8 AP50 (k=10). Interestingly, increasing the number of SAM runs has little impact on the runtime performance. 
Indeed, the table below shows (for SAM with different backbones) that the high computational cost of SAM is incurred when the image is set (which is done only once), and little overhead is added when re-running the prediction based on a new set of input prompt points. Thus, performing multiple SAM predictions on the same image has no large impact on the runtime, but makes our model more robust. | Function | Backbone | time [s] | | --- | --- | --- | | SAM.set_image() | ViT-H | 0.497 | | SAM.predict() | ViT-H | 0.006 | | SAM.set_image() | ViT-B | 0.109 | | SAM.predict() | ViT-B | 0.005 | ### Q2.b: How often are SAM and CLIP called? Since we only use the top k_view views for each mask, not all frames are utilized. Therefore, the number of iterations for SAM is $\mathsf{k_{rounds}\cdot \mathbf{M \cdot k_{view}}}$ and for CLIP, it is $\mathsf{\mathbf{M \cdot k_{view}}\cdot L}$, where $\mathsf{M}$ is the number of 3D masks predicted by the class-agnostic mask predictor and $\mathsf{L}$ is the number of levels used for the multi-level crop. Overall, this approach is comparable in runtime to OpenScene, which needs to run OpenSeg over the full frame (which is slow), whereas here we run the much faster CLIP on small crops. ### Q3: How is the confidence score $s_r$ obtained? The confidence score $s_r$ is an output of the SAM model. For each 2D mask, SAM also predicts a confidence score $s_r$ (see main paper L201, L204-207). We will describe this more clearly in the paper. In the supplementary material (Sec. 1.2), we provide further explanations on how we utilize SAM (L52-L92), and visualize 2D mask proposals along with confidence scores returned by SAM in Fig. 3 to Fig. 6. --- Rebuttal Comment 1.1: Comment: Dear reviewer X8Ds, As the discussion phase ends today, we will not be able to further clarify potential additional concerns. 
We would be very grateful if you could respond to our rebuttal and offer us an opportunity to further engage with your concerns and address any additional questions you might have! Thank you for your time and feedback! Best, Authors
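The call counts from the Q2.b answer above can be written down explicitly. A trivial sketch (the function names and the example numbers, except the defaults $\mathsf{k_{view}}=5$ and $\mathsf{L}=3$ from the rebuttal tables, are ours):

```python
def num_sam_calls(k_rounds, M, k_view):
    # SAM is prompted k_rounds times for each of the top k_view views of each of the M masks.
    return k_rounds * M * k_view

def num_clip_calls(M, k_view, L):
    # CLIP encodes L multi-scale crops for each of the top k_view views of each of the M masks.
    return M * k_view * L
```

For example, with a hypothetical M = 100 masks and the defaults k_view = 5, L = 3, k_rounds = 1, this gives 500 SAM predictions and 1500 CLIP forward passes per scene.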
Intrinsic Dimension Estimation for Robust Detection of AI-Generated Texts
Accept (poster)
Summary: This paper proposes a method to detect AI-generated texts based on the intrinsic dimensions of sequences generated by humans and LLMs. The authors use a method called persistent homology dimension (PHD) to estimate the intrinsic dimensions of both human- and LLM-generated texts. Using a variety of datasets across multiple languages, the authors first notice that intrinsic dimensions between human- and machine-generated texts tend to differ, with the intrinsic dimension of human texts being higher than that of machine-generated ones (the intrinsic dimensions again differ between languages, yet they are always distinguishable from those of human texts). Using the estimated dimensions, the authors then train a simple single-feature logistic regression classifier to differentiate between human- and machine-generated texts. Experimental results reveal that their method substantially surpasses existing baselines (e.g., DetectGPT, GPTZero), and interestingly the method remains robust against paraphrase attacks conducted using DIPPER. Finally, the authors show that their method is more robust against samples authored by non-native speakers than existing baselines. Strengths: * The authors propose a novel method to detect AI-generated content, outperforming existing works. * The method is effective yet easy to understand, and I believe researchers working on similar topics would be quite interested in the presented results. * The additional experiments showing that the model is robust against paraphrase attacks further strengthen the method’s relevance. Weaknesses: * The analysis could be more extensive. For example, given that the classifier is based on a single feature, it would be interesting to see how performance varies as a function of dataset size, and how vulnerable its generalization capabilities are to ‘noisy’ datasets (i.e., those where dimensions for individual samples have high variance). 
While this has partially been explored (Figure 2), showing classification results in the Experiments section would have been informative. * It would be nice if Table 1 could be extended for additional approaches to ID estimation (line 188 mentions that 12 approaches have been explored). Showing results across those approaches (e.g., in a boxplot) would give the reader a better notion of how well the differences in dimensions between human- and AI-generated texts generalize across methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors clearly outline the limitations of their work in a dedicated section. However, a few additional words on the broader impact (e.g., the applicability of the method in practice) would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We have considered adding more analysis of ID estimations for the texts to appendices. In fact, we have explored some types of "linguistic noise", i.e., deviations from standard language. In particular, we use data from Reddit that contains quite informal texts (with a notable amount of colloquialisms) as one of our primary datasets; the standard deviation of ID estimates for them is ~28% higher than for Wikipedia texts, but the mean is still the same. We also analyze the performance of our method on texts produced by non-native speakers, and it also shows a higher variance of ID estimates. We leave more detailed investigations of the impact of other kinds of noise in data for future work. Due to space limitations, we could not include the data for other ID estimation approaches in the primary text, so we limited Table 1 to two methods only. Thank you for this comment; we will add an extended version of Table 1 (with other ID estimators) into the appendix. --- Rebuttal Comment 1.1: Title: Acknowledgement of rebuttal Comment: Thanks to the authors for addressing my concerns. I would encourage you to add the details provided in your response to the paper (as you have suggested). As already indicated with my initial scores, I believe this is solid work and therefore recommend acceptance.
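The single-feature detector described in the summary above can be illustrated with a minimal sketch. Everything here is synthetic: the ID values are stand-in Gaussians (the paper only reports that human texts tend to have higher PHD estimates than generated ones; the means and spreads below are invented), and we fit the one-feature logistic regression with plain gradient descent rather than an off-the-shelf solver.

```python
import numpy as np

rng = np.random.default_rng(0)
human_ids = rng.normal(9.0, 1.0, 200)    # synthetic stand-in: human texts, higher ID
machine_ids = rng.normal(7.0, 1.0, 200)  # synthetic stand-in: generated texts, lower ID
x = np.concatenate([human_ids, machine_ids])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = human-written

xs = (x - x.mean()) / x.std()  # standardize the single PHD feature
w, b = 0.0, 0.0
for _ in range(1000):          # gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(w * xs + b)))
    w -= 0.5 * np.mean((p - y) * xs)
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(w * xs + b)))
accuracy = float(np.mean((p > 0.5) == y))
```

On this toy data the one-feature classifier approaches the Bayes-optimal threshold between the two Gaussians; the paper's claim is that real PHD estimates of human and generated texts are separable in a similar single-feature fashion.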
Summary: The paper describes work on detecting AI-generated text using intrinsic dimensionality (ID) estimation methods through Persistent Homology Dimension (PHD) (and MLE). The authors motivate this approach by highlighting that machine-generated and human-written texts exhibit differences in their topological representation. The authors explored this method using an extensive set of experiments covering various SOTA models for detection, such as DetectGPT, GPTZero, etc., across multiple languages in a crosslingual setup and across multiple source domains of data such as Reddit, StackExchange, and Wikipedia. Results show that using ID as a feature performs consistently and somewhat more robustly than other methods, especially in general detection. Overall, I believe the paper is a good addition to NeurIPS, especially as it gives a unique, theoretically motivated perspective on detecting machine-generated texts. Strengths: There are a number of things that I liked about the paper that contributed to my overall recommendation. First, the paper gives a unique point of view on the standard AI-generated text detection task, deviating from the usual approach of looking at surface-based style and linguistic characteristics. The paper shows strong motivation and support as to why intrinsic methods such as PHD show substantial differences between human-written and machine-generated texts. Second, I find merit in the breadth and depth of the experiments conducted, which extend to crosslingual and crossdomain settings. It is also good that the authors explored a number of generative models and targeted some of the concerns with detecting machine-generated text, such as errors with texts from non-native speakers. Lastly, the paper is readable enough for a large variety of readers to appreciate its contributions. With these points, I recommend the paper for acceptance. Weaknesses: There are no major issues in the paper that I’m particularly concerned about. 
There are some points, however, that I would recommend be given more emphasis and clarification to improve the quality of the discussion as well as to increase confidence in the results. 1. I would have appreciated it if the authors provided their clear inferences as to why the PHD-based detector, even with a paraphraser, is substantially better than the other classifiers. If the result indeed shows that the method is effective, what does it imply? Does it suggest that using a non-linguistic representation such as the intrinsic dimension of data, compared to other linguistic factors considered by DetectGPT, RankGen, etc., is more practical? Quantitatively, how much of a performance offset do paraphrasers like DIPPER cause for the proposed classifier as well as for the other detectors? It would also be interesting, and a good addition to the discussion, if the authors could show through a statistical test that the proposed detector remains significantly effective even against paraphrasers. 2. There are a few vague statements, particularly in parts of the discussion of results. In the cross-domain and cross-model performance, statements such as “the PHD classifier is not influenced by domain transfer” and “PHD classifier to be more robust to entirely new AI models” need further clarification and even rephrasing. Achieving relatively favorable performance on three datasets hardly makes any model robust and immune to domain shift unless proven in depth with more data and/or backed by statistical tests. 3. It seems odd that ChatGPT was never mentioned anywhere in the statement of models used for experiments and then appears in one of the results sections. Was this portion rushed? Justification is needed for this, as readers would expect all experiments on evaluating generations to come from what was previously declared, and this should be uniform for all experiments. 
Also, there’s no “GPT3.5-davinci-003” in OpenAI’s model endpoints (https://platform.openai.com/docs/models/model-endpoint-compatibility). This may have been a typo, so kindly clarify. 4. The same recommendation also applies where the authors should provide more discussion instead of describing the values in the table, as seen in the experiment on non-native written text vs. AI. It was observed that existing models such as GPTZero and OpenAI tend to have higher rates of misclassifying non-native text as AI-generated, but not so much PHD and especially the MLE-based classifiers. Thus, what are the authors’ insights as to why MLE performs better than PHD for this type of experiment? The authors previously mentioned that MLE performs well on “small samples, noisy settings, and small variance”, so how does this tie in with the result? What gives MLE the edge over PHD in this scenario? 5. There are mentions of using specific splits of existing datasets in the paper. These should all be added as a separate table in the Appendix. It is quite challenging for the reader to gauge how much data was used specifically for the study. This information is also useful when analyzing the model performances. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Aside from the supporting questions highlighted in the Weaknesses section, here are some points that further require clarification: 1. Is it scientifically correct to refer to the intrinsic dimensionality methods as a non-linguistic approach for detecting AI-generated texts? To me, the method seems to merit its own category compared to approaches using stylistic properties of texts. 2. What decoding hyperparameter values were used? Are these uniform across all model generation setups? 3. Is it possible that the training setup of RoBERTa contributed to it being more robust across various domains? RoBERTa’s performance was highlighted, but the authors did not provide any of their own inferences. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The added limitations section seems acceptable but I would recommend to further clarify the breadth of dataset used to merit claiming robustness and applicability to domain shift. It might be worth comparing to previous literature on how much data they used before they can refer to one model as robust. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Weaknesses. 1. We suppose that modern language models can mimic human-written texts very well in terms of common linguistic properties such as grammar, semantics, style, etc., but there are subtle differences that can be captured via numerical analysis of the topology of text embeddings or the curvature of the probability function, somewhat similar to the approach of DetectGPT. We consider it an important and interesting direction for future work. 2. Naturally, we cannot state that our method will work in any domain shift situation. Our point was to demonstrate that the ID of the text embeddings is a reliable characteristic of text that helps to discover the above-mentioned subtle differences in text data and disentangle them from text style and semantics. 3. Thank you for your remark! By “GPT3.5-davinci-003” we meant the model “text-davinci-003” from /v1/completions. By “ChatGPT” we meant the model “gpt-3.5-turbo” from /v1/chat/completions. We often refer to both models as “GPT-3.5” or the “GPT-3.5 family” in the introduction and other parts of the paper because both are listed as such in the list of “GPT-3.5 models” at https://platform.openai.com/docs/models/gpt-3-5 . If accepted, we will review our naming and make it more clear and precise in the text (i.e., replace “GPT3.5-davinci-003” by “text-davinci-003”; specify more clearly that we refer to both davinci and ChatGPT as “GPT-3.5” in the introduction, etc.). 4. Table 4 shows that PHD and MLE have roughly the same ROC-AUC for ChatGPT texts in English. The higher accuracy at 1% FPR means that MLE provides a shorter left “tail” on real data, i.e., there are fewer real texts with very low MLE. We suppose that this is exactly because it works better with small samples. As we have mentioned in the answer to Reviewer 1, many of the PHD extreme cases are too short. 5. Thank you for pointing this out. 
We planned to upload the exact splits to the GitHub repository with the code for this paper, but we can also add tables with the sizes of the used splits to the appendices. Questions Q1. Our approach can be called “non-linguistic” in the sense that we are not working with text or language *directly*; instead, we analyse its mathematical representation (which, of course, inherits various features of the text). Q2. Yes, they are uniform for every model across all generation setups. For our dataset, we used ChatGPT with default parameters (temperature 0.7). We used texts generated by GPT-2, GPT-3.5, and OPT that were published by Krishna et al. (authors of "Paraphrasing evades detection..."), and detailed information on the choice of hyperparameters can be found there. We followed the procedure from the paper, and in experiments with the intrinsic dimension reported results on data without watermarking (strength_0.0). Q3. Indeed, RoBERTa (which is exactly “Robustly optimized BERT”) is developed to be robust. An important factor shown to improve the performance on downstream tasks is increasing the training dataset by an order of magnitude and, importantly, adding more variability to the data. While BERT was trained on Books and Wiki only, RoBERTa was also trained on a large web-crawled corpus. So, both Reddit and StackOverflow are types of data seen during pretraining. We believe that this is, indeed, the root cause of the robustness of ID estimation for RoBERTa embeddings. We suppose that ID estimation is more stable for data familiar to the embedding model. We plan to test this hypothesis in our further study. --- Rebuttal Comment 1.1: Title: Response acknowledged. Score remains the same (Accept) Comment: This is to acknowledge that I have fully read both my fellow reviewers' feedback as well as the authors' response to my own assessment. 
For the rebuttal period, the authors have provided some additional information addressing the concerns listed in my review (see Weaknesses section). With this, I would like to summarize some points that the authors are strongly recommended to add to the final paper in case of acceptance. This will ensure that any claims made in the paper are properly supported with evidence as well as discussed thoroughly. 1. In response to weakness #1, the authors say that *"there are subtle differences that can be captured via numerical analysis of the topology of text embeddings"*. This is still vague, in my opinion. As suggested, it would be better to show a statistical test of difference over a number of runs with the ID method (say, with different splits) for Table 2 or in Figure 6, with or without paraphrasers. 2. In response to weakness #2, the authors say that *"we cannot state that our method will work in any domain shift situation."*; however, when you read the paper, there are mentions of *“the PHD classifier is not influenced by domain transfer”* and *“we can expect the PHD classifier to be more robust to entirely new AI models”*. I suggest rephrasing these sentences instead, as they give an inflated expectation for the classifier model. 3. The responses to weakness #4 and Question #3 are good additions to their appropriate sections in the paper, provided more details are given. I was specifically looking for these in the paper. Nonetheless, I would like to thank the authors for their efforts and for being very detailed in their responses to my questions and concerns. My score will remain the same (7 - Accept) and I would be happy to vouch for the paper to be accepted to the conference.
Summary: This paper proposes a new method for artificial text detection (ATD) with intrinsic dimension (ID) estimation. First, contextualized representations of the tokens in the text are extracted by a RoBERTa model. Next, the authors estimate the ID of this set of contextualized representations: (1) N tokens and their corresponding vectors are sampled from the set, forming a vertex set; (2) the persistent homology dimension (PHD) is estimated by measuring the lengths of the edges in a minimal spanning tree on the vertex set; (3) the value of N is varied to obtain a set of N-PHD pairs, and the ID is measured based on the slope of their linear correlation. Finally, the ID is utilized as a feature in a logistic regression model for binary classification. The authors have conducted extensive experiments on widely used benchmarking datasets. The results demonstrate that the proposed method exhibits greater robustness when compared to existing ATD methods, particularly in terms of its resilience towards adopted AI models, paraphrase attacks, and non-native speakers. Furthermore, in comparison to a conventional RoBERTa-based classifier, this method showcases significantly improved out-of-domain performance in both domain and AI model transfer scenarios. Additionally, the authors have curated a new dataset for multilingual ATD, although specific details regarding this dataset are not provided. Strengths: 1. The application of persistent homology and intrinsic dimension estimation for ATD is both intriguing and well-founded in theory. While the methodology is similar to that of [1], which is already cited in the paper, this method relies on fewer features, and the paper conducts more comprehensive experiments to assess the robustness of competing methods. Hence, the proposed method and its findings exhibit a significant level of novelty. 2. The proposed method has achieved promising performance, particularly in terms of robustness. 
This is mainly because it does not need to finetune the parameters of a large Transformer model, nor does it assume reliance on an AI model that is likely to have generated the text. 3. Extensive experiments and analysis are done to demonstrate the effectiveness of the method. [1] Kushnareva, Laida, et al. "Artificial Text Detection via Examining the Topology of Attention Maps." EMNLP. 2021. Weaknesses: 1. Some claims are not well supported. The authors claim that they estimate the intrinsic dimension (ID) of text data. In fact, what they really estimate is the ID of the contextualized representations. By contrast, [1] directly estimates the ID of natural images. 2. Although extensive experiments have been conducted, one important analysis is missing: investigating the impact of the decoding algorithms. One important assumption of this work is that the metric space of human-written text has more isolated sub-graphs, which can arise from rapid shifts of topic, the usage of rare words, and so on. AI models can also achieve these characteristics by adjusting the hyperparameters of decoding algorithms, such as changing the softmax temperature or increasing the values of P and K in nucleus and top-K sampling, respectively. As seen in Table 2, PHD has better performance for detecting GPT-3.5 than GPT-2, mostly because GPT-2 tends to generate text that has more grammar mistakes and is less akin to human language. Consequently, the reliability of the method in the face of different sampling techniques becomes questionable. 3. The paper is not very easy to follow: some important details and definitions are missing. To name a few: * Some background information about topological features, especially connected components, is necessary to understand the estimation of the persistent homology dimension. * There is no definition of $C$ in Line 177, nor of $\tilde{C}$ in Line 183. 
* In Section 3, the authors solely explain the estimation process for $E^{i=0}$ without explicitly indicating that they set $i=0$ until Section 4. It would be clearer if this information were explicitly stated earlier. * The authors list the new dataset as one of the main contributions of their work. However, no specific details are presented. It would be helpful to know the dataset's statistics, the curation process employed, as well as any annotation, cleaning, or pre-processing steps that were carried out. [1] Pope, Phil, et al. "The Intrinsic Dimension of Images and Its Impact on Learning." ICLR. 2020. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: ## Typos 1. Line 45: some downstream task(s) 2. Line 177: where equivalence mean(s) ## Suggestions 1. It would be helpful to include some text examples with high and low ID. ## Questions 1. Why does RoBERTa-large have higher variance than RoBERTa-base (Figure 5)? 2. What does the * mean in Table 2? 3. Why do you shuffle the order of datasets in Table 3? It is weird that PHD has higher performance on OOD than ID. 4. Why does https://arxiv.org/abs/2104.08894 report a much higher dimension for images? 5. Line 177: “equivalence mean that…” — where is the equivalence? 6. The caption in Figure 6 is confusing: the title of the right-hand-side graph says it is the success rate of classification (the higher the better) but the caption says it is the decrease of performance (the lower the better). 7. What is the ID of text generated by randomly sampling tokens from the vocabulary? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations are fairly elaborated. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
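The MST-based PH-dimension estimator summarized in this review (sample N vectors, sum the edge lengths of a minimal spanning tree, vary N, fit a slope on a log-log plot) can be sketched compactly. This is a generic reimplementation of the scaling law $E^0(n) \sim C n^{(d-\alpha)/d}$ with $\alpha = 1$, not the authors' code; the subsample sizes, trial counts, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_total_length(points: np.ndarray) -> float:
    """Sum of edge lengths of a Euclidean minimum spanning tree on the points."""
    dist = squareform(pdist(points))          # dense pairwise distance matrix
    return float(minimum_spanning_tree(dist).sum())

def ph_dim(points: np.ndarray, sizes=(40, 80, 160, 320),
           n_trials=5, seed=0) -> float:
    """Estimate the 0-dimensional PH dimension via the scaling law
    E^0(n) ~ C * n^((d-1)/d) (alpha = 1): fit the slope s of log E vs log n
    over random subsamples, then d = 1 / (1 - s).
    Requires len(points) >= max(sizes)."""
    rng = np.random.default_rng(seed)
    log_n, log_e = [], []
    for n in sizes:
        for _ in range(n_trials):
            idx = rng.choice(len(points), size=n, replace=False)
            log_n.append(np.log(n))
            log_e.append(np.log(mst_total_length(points[idx])))
    slope, _ = np.polyfit(log_n, log_e, 1)    # least-squares fit on log-log scale
    return 1.0 / (1.0 - slope)
```

For points sampled uniformly from the unit square, the MST length grows roughly as sqrt(n), so the recovered dimension should be close to 2; in the paper's setting the points would instead be RoBERTa token embeddings of one text.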
Rebuttal 1: Rebuttal: Thank you for your review! First, we address the “Weaknesses” section. 1. We use the term “ID of text” for simplicity; in contrast to image processing, text embeddings are the main numerical representation of texts widely used in NLP nowadays, so we believe there is very little chance of confusion caused by this slightly inaccurate terminology. Text data, regardless of the type of embeddings, has common properties, e.g., it is discrete, and there is a relatively small number of points available for each text. Our methods of ID estimation are able to deal with such types of data. The experiments in this work are restricted to text embeddings obtained by Transformer MLMs. But our focus is not on the properties of these embeddings; we show that embedding sets obtained by the same model have different estimated IDs for different types of data, and we study the properties of the data via their embeddings’ ID. To be precise, we estimate the ID of a set of embeddings obtained by RoBERTa-family models, for both natural and generated texts, and show that they differ. We thank you for pointing out this confusion and will clarify it in the text if accepted. 2. Unfortunately, space and time constraints precluded us from performing a thorough analysis of the impact of the decoding algorithm parameters and including the results of this analysis in the paper. We will include them in the appendices after concluding additional experiments. We can note here that the ID estimate of texts obtained from GPT-3.5 depends exponentially on temperature: texts obtained with temperature values up to 1.2 usually have lower ID than human-written texts, but if the temperature is at least 1.7, then generated texts have notably larger ID than human-written ones. As for GPT-2, we found that top-K sampling (with K=40) on average leads to ~15% lower values of ID. 3. We thank you for noting unclear details in the paper. 
We will fix them and add the necessary explanations in the final version if accepted. 4. Thank you for this suggestion! We will definitely add this information to the paper if accepted. Questions Suggestion 1. Please find some examples in Table 1 in the attached file. It seems that most examples with lower ID contain a lot of addresses, geographical names, or proper nouns. It may be connected with the usage of rare tokens or with more numbers in the text than usual, but more experiments are needed to be certain. Some examples, however, are just very short texts, for which our method of PHD calculation is less reliable. This means that there is high variability of the ID values for a *single* sample, calculated with different random seeds. By our estimation, this kind of data occupies the lowest and highest 0.5% of the distribution. Q1. We hypothesize that it may be caused by the larger embedding size in RoBERTa-large (1024 compared to 768 for RoBERTa-base). It is known from the literature that increasing the latent space dimension may often negatively affect the stability of individual ID estimates for fractal-based methods [1] (and PH-dim is one such method). See: Camastra and Staiano, Intrinsic Dimension Estimation: Advances and Open Problems, Information Sciences, vol. 328, pp. 26-41, 2016. Q2. * means that this number is not directly comparable with the other numbers in the table because the detector uses the weights of exactly the same model that was used for generation, so it does not belong to the class of universal detectors. This issue for the pair DetectGPT-GPT2 is mentioned in the subsection “Comparison with universal detectors” (l. 262-263 and 268-269). We will add this explanation to the table caption if accepted. Q3. This is a typo in the table heading; thank you for pointing it out. The correct order of labels is given in the left half of the table: Wikipedia, Reddit, StackExchange. Meanwhile, the numbers are given in the correct order. Q4. Thank you for the useful reference! 
The setup of this paper differs from ours. We estimate the dimensionality of each text separately; they estimate the dimensionality of the entire dataset, considering every image as a separate point. Naturally, such estimation yields larger numbers with larger variation. Tables 1 and 2 and Figure 1 in that paper show that ID estimation depends both on the dataset and on the estimator's properties, and varies from 7 to 45, while our estimates mainly lie in the 7-12 range. As mentioned above, our estimation also depends on the properties of the embedding model. Q5. It is related to the equation from the same line: $E^0_\alpha(X) \sim C n^{\frac{d-\alpha}{d}}$, where $\sim$ is read as “equivalence”. We will state it more clearly in the final version if accepted. Q6. We opted to preserve the style of the figures from the paper where those experiments were originally introduced. The plots themselves are correct (lower is better on the left-hand side, while higher is better on the right-hand side); we will clarify this in the caption of the figure. Q7. In Figure 1 of the attached file, we show boxplots of the IDs of texts made up of repetitions of the same token (const), generated by davinci, written by humans, and generated by sampling random tokens from the RoBERTa vocabulary. It supports our hypothesis that embeddings of more homogeneous texts tend to have lower ID, while embeddings of more heterogeneous texts tend to have higher ID. --- Rebuttal Comment 1.1: Comment: I have read the response, and I am still leaning towards accepting the paper.
Summary: The paper proposes using the intrinsic dimensionality of the manifold underlying the set of embeddings of a given text to detect AI-generated texts, since the average intrinsic dimensionality of AI-generated texts is lower than that of natural language. It is found that the intrinsic dimensionality of different languages varies a lot. Strengths: - The paper itself is novel. - The finding that the intrinsic dimensionality of different languages varies a lot is interesting. - The method is robust in cross-domain and cross-model scenarios. Weaknesses: - The paper does not discuss the reason for the difference in intrinsic dimensionality across languages. - The reason why the method works isn't clearly understood. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Intrinsic dimensionality is calculated on BERT/RoBERTa embeddings; I wonder if the *syntax of individual languages* leads to the *difference of intrinsic dimensionality in different languages*. - Is there any analysis of the bad cases (incorrectly classified cases)? This might help better understand the method. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. In our work, we study intrinsic dimensionality estimation for text embeddings and show experimentally that this mathematical value can reflect some useful information about the given text, namely, it helps to separate artificial and human-written texts. We admit that a huge number of important questions are left out of the scope of this paper. To the best of our knowledge, we are the first to show that ID estimation of text embeddings applies to this kind of downstream task, and we just couldn’t cover all the questions related to this method in a single paper. One of the most important questions is the one raised by Reviewer SSuo: why does this method work? Indeed, this has not been clearly understood yet. There is a mathematical intuition behind the notion of intrinsic dimensionality; using it, we can conclude that GPT-generated texts have a smaller number of degrees of freedom, i.e., they are less creative in some sense. We believe that there are some subtle semantic, syntactic, stylistic, or statistical differences, which are reflected by the ID value and could support this intuition. Unfortunately, so far, we could not discover these properties; but our work proves experimentally that some kind of differences exist. We hope that future research will shed some light on this question. Answer to Q1. The difference between the IDs of different languages could be caused by many factors: the properties of the language, e.g., analytic vs. synthetic; the script (e.g., syllabic or alphabetic); the quality of the language representations of the embedding model; the quality of the data (e.g., Wikipedia articles can have different quality for different languages), etc. It is interesting to note that in Fig. 4, Asian languages have smaller ID than European ones; among the European languages, related languages are often grouped (Russian and Ukrainian, Spanish and Italian). 
We can hypothesize that geographical and linguistic connections between languages lead to similar ID; this hypothesis should be a subject of careful investigation, and we leave it for future research. Answer to Q2. As for the bad case analysis, we noticed an interesting tendency among the examples with the lowest ID of human-written texts (those will surely be misclassified). It seems that most of these examples contain a lot of addresses, geographical names, or proper nouns. It may be connected with the usage of rare tokens or with more numbers in the text than usual, but more experiments are needed to claim it for sure. Some examples, however, are just very short texts. Examples are provided in Table 1 in the attached file. We didn’t notice any significant anomaly in the top ten examples with the highest ID of davinci-generated texts, except that all these texts were very short. We hypothesize that PH dimension estimation doesn’t work properly on short texts. A more thorough analysis of bad cases is a matter for future work. In Fig. 1 in the attached file we provide an analysis of extreme cases. The simplest samples have the lowest ID, and the highest ID corresponds to the completely random ones. This corresponds to the general understanding of ID as the number of degrees of freedom in the data.
Rebuttal 1: Rebuttal: We are thankful to all the reviewers for their inspiring reviews! In the attached file, we provide examples and figures illustrating extreme cases of ID values. We discuss these results in the direct answers to the reviewers (SSuo and oVCJ). Pdf: /pdf/c84d48f2166a10a70ce42642e9499a5bcad27ade.pdf
NeurIPS_2023_submissions_huggingface
2023
Constant Approximation for Individual Preference Stable Clustering
Accept (spotlight)
Summary: This paper continues the study of individual preference stability (IP stability) initiated by Ahmadi et al. (2022). Given a set of points in a metric space, a k-clustering is said to be 1-IP stable if for every point in the set, its average distance to the points in its cluster is at most its average distance to the points in any different cluster; so, in this sense, a point prefers its own cluster to other clusters. There are datasets for which a 1-IP stable clustering does not exist, and further, it is NP-hard to decide whether a dataset admits a 1-IP stable k-clustering. In light of this, it is natural to broaden the definition to approximate IP stability: a k-clustering is \alpha-IP stable if the average distance of a point to its own cluster is at most \alpha times its average distance to any other cluster. The work of Ahmadi et al. shows that an O(n)-IP stable clustering always exists and gives an algorithm for computing one. The present work closes the gap between the known lower and upper bounds: the authors propose an algorithm that always outputs an O(1)-IP stable clustering (so they also prove that one always exists, a fact not already established). Interestingly, the output clustering has the additional property that it is a constant factor approximation for k-center. Indeed, the greedy k-center algorithm of Gonzalez is a phase in their algorithm, and the "ball-carving" phase of their algorithm is similar in spirit to another approximation algorithm for k-center (although there are important nuances in the algorithm in the present work that differentiate it). The fact that the algorithm doubles as an approximation algorithm for k-center, the authors note, ensures that the O(1)-IP stable clustering is in some sense meaningful; for, they demonstrate that a random k-coloring of a graph will produce an O(1)-IP stable clustering (for certain k), but of course such a clustering is not meaningful in general. 
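The α-IP stability notion is easy to operationalize: for each point, compare its average distance to its own cluster with its smallest average distance to any other cluster; the worst such ratio over all points is the smallest α for which the clustering is α-IP stable. A minimal sketch (numpy only; the precomputed-distance-matrix interface and function name are my own choices, not the paper's):

```python
import numpy as np

def ip_stability_factor(dist: np.ndarray, labels: np.ndarray) -> float:
    """Smallest alpha such that the given clustering is alpha-IP stable:
    max over points of (avg dist to own cluster) / (min avg dist to another
    cluster). Points in singleton clusters are trivially stable."""
    alpha = 0.0
    clusters = np.unique(labels)
    for i in range(len(dist)):
        own = labels[i]
        mask = labels == own
        mask[i] = False                     # exclude the point itself
        if not mask.any():                  # singleton cluster: always stable
            continue
        own_avg = dist[i, mask].mean()
        other_avg = min(dist[i, labels == c].mean()
                        for c in clusters if c != own)
        ratio = own_avg / other_avg if other_avg > 0 else float("inf")
        alpha = max(alpha, ratio)
    return alpha
```

On two tight, well-separated pairs of points, the natural 2-clustering yields a factor well below 1 (hence 1-IP stable), while a clustering that splits each pair yields a factor above 1.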
The authors also introduce two variants of IP stability. For Min-IP stability, in which "average distance" is replaced with "min distance" in the definition of IP stability, they show that the Single Link algorithm produces an exactly stable (optimal) clustering. For Max-IP stability (defined analogously), they show the greedy k-center algorithm of Gonzalez gives a 3-IP stable clustering. Finally, the authors show experiments comparing their algorithms to k-means++ as a baseline. While their algorithm performs slightly worse than k-means++ for IP stability in general, they argue for its robustness by showing a hard instance on which it outperforms k-means++. They also show that in practice k-means++ can be up to five times worse than their (optimal) algorithm for Min-IP stability. Strengths: - This work closes the (previously large) gap between upper and lower bounds for IP stability. It demonstrates the existence of constant-stable clusterings and how to find them efficiently. It also provides a convincing argument that a clustering produced by the algorithm is meaningful by showing that it also gives a constant approximation for k-center, which I think is a key contribution given that the authors show, on the flip side, that a clustering can satisfy O(1)-stability but otherwise not uncover meaningful structure. - The algorithm takes inspiration from approximation algorithms for k-center, but there are important subtleties in the adaptation of these algorithms. The Ball-Carving algorithm (Algorithm 1) is a more nuanced version of an approximation algorithm for k-center (in which one guesses the optimal radius, and then repeatedly cuts out balls of that radius until no points remain). In Algorithm 1, more care is taken as to how to choose the center of the ball at each iteration (instead of being chosen arbitrarily as in the approximation algorithm) as well as how to assign points to centers (instead of just carving out a ball of a certain radius). 
Finally, the centers from running the algorithm of Gonzalez are used to prune the number of clusters produced in the carving algorithm. - The authors repurpose existing algorithms (Single Link clustering and the greedy algorithm of Gonzalez) to give guarantees for Min- and Max-IP stability. This is interesting in its own right because it shows that the notion of stability is intimately related to other clustering objectives. Weaknesses: - While Algorithms 1 and 2 are described very clearly, little motivation or intuition is given. For instance, it would be useful to provide examples that show the necessity of the nuances in the Ball-Carving algorithm. - A discussion of how Algorithms 1/2 differ from or build on the algorithms of Ahmadi et al. would help highlight the novelty of the contributions. - A more specific comment: The paragraph on lines 66-70 seems inconsistent with later claims (unless I have misinterpreted something). It seems that if the parameter r is chosen to be anything, then the properties claimed in lines 66-70 hold, but potentially with more than k clusters. Only when r is chosen as a function of r_0 and then the pruning step in Algorithm 2 is performed do the properties seem to hold with k clusters. If this is the case, then I think the current wording in lines 66-70 is easily misinterpreted. Still, the stronger guarantee of uniform IP stability with r chosen as a function of r_0 is noteworthy. - It seems that Algorithm 3 is simply the well-known algorithm of Single Link clustering. This should be labelled as such and cited. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - A variant of Single-Link clustering is used in the work of Awasthi, Blum, and Sheffet (2012) to find optimal clusterings on perturbation stable instances. 
- Do you have any intuition as to why k-center approximation algorithms in particular are useful for IP-stability algorithms, given that IP-stability (in its original form) is based on average distances? While k-median and k-center are in general different notions, it is perhaps not clear a priori which objective could help inspire an algorithm for IP-stability. Have you considered the question of finding an algorithm that produces a clustering that is O(1)-IP stable and also constant approximate for other objectives? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
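For reference, the greedy k-center algorithm of Gonzalez, which this review says is used as a phase of the paper's algorithm and as the Max-IP algorithm, is a short farthest-point procedure. A minimal illustrative sketch (my own code, not the paper's implementation):

```python
import numpy as np

def gonzalez_k_center(D, k, first=0):
    """Greedy 2-approximation for k-center (Gonzalez's farthest-point heuristic).

    D: (n, n) distance matrix. Starting from an arbitrary point, repeatedly
    pick the point farthest from all centers chosen so far.
    Returns the list of k center indices.
    """
    centers = [first]
    dist_to_centers = np.array(D[first], dtype=float).copy()
    for _ in range(k - 1):
        nxt = int(np.argmax(dist_to_centers))
        centers.append(nxt)
        # each point keeps its distance to the nearest chosen center
        dist_to_centers = np.minimum(dist_to_centers, D[nxt])
    return centers
```

The maximum entry of `dist_to_centers` after the loop is the radius r_0 that, per the review above, the paper's Algorithm 2 uses to set the ball-carving parameter r.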
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We address your concerns and questions below. > While Algorithms 1 and 2 are described very clearly, little motivation or intuition is given. Thanks for the suggestion. It did indeed require a good amount of work for us to figure out the details of the ball carving algorithm and we considered several simpler versions of the algorithm which didn’t work. As such, we realize that some motivation is in order. We will include this in the paper and provide some examples showing that various other natural approaches do not work. > A discussion of how Algorithms 1/2 differ from or build on the algorithms of Ahmadi et al. would help highlight the novelty of the contributions Our algorithm is completely different from the algorithm of [Ahmadi et al.] and is more sophisticated. Their O(n)-approximation relies on the standard metric embedding result into the line metric, together with a simple greedy algorithm for IP-stability on the line metric. The discussion of previous results is in lines 51-55 of the paper. > The paragraph on lines 66-70 seems inconsistent with later claims You’re correct, Algorithm 1 works for any value of r but provides no guarantee on the number of clusters, and we only get the property to hold with exactly k clusters for the right value of r, which depends on r_0 (which in turn depends on P and k). We will make it clear in lines 66-70 that r depends on the set of points P and on k. > It seems that Algorithm 3 is simply the well-known algorithm of Single Link clustering. Yes, we will provide a citation appropriately. > A variant of Single-Link clustering is used in the work of Awasthi, Blum, and Sheffet (2012) to find optimal clusterings on perturbation stable instances. 
Given the algorithmic similarities, do you see any connections between these two notions of stability? While the line of research on perturbation stability focuses on designing faster or more accurate algorithms utilizing the strong stability (and separability) conditions, here we aim to approximately achieve such stability conditions in general metrics. We will expand on the discussion of stability in clustering in the related work section (lines 158-165) and add a comparison with the result of Awasthi, Blum, and Sheffet (2012). > Do you have any intuition as to why k-center approximation algorithms in particular are useful for IP-stability algorithms, given that IP-stability (in its original form) is based on average distances? k-center is particularly useful in the context of uniform approximate IP-stability. After getting said uniform property from the clustering output by Algorithm 1, it is natural to next consider the k-center algorithm. Indeed, for k-center, we have the relation that the minimum distance between clusters is also an upper bound on the distance from any point to its nearest center. This relation ensures that when running Algorithm 2, we simultaneously get that each cluster is non-empty and a uniform upper bound on the diameter of each cluster in the final clustering. This property does not necessarily hold for k-median or k-means. We will add a comment on this in the paper. > Have you considered the question of finding an algorithm that produces a clustering that is O(1)-IP stable and also constant approximate for other objectives? Simultaneously achieving approximate IP stability as well as a global objective other than k-center is a very nice open question for future work (we also list it in Appendix D). We don’t see a way of directly extending our algorithm to k-median or k-means, for instance, because we use the diameter guarantee which is more specific to k-center. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. While the problem itself is fairly new as other reviewers point out, its formulation is natural and contributes to the growing literature on fairness and stability in clustering. The repurposing of existing algorithms is both subtle and clean, and the simultaneous guarantees for k-Center are appealing. Moreover, there are exciting follow-up questions for further research. I maintain my evaluation.
Summary: This paper considers the problem of finding stable clusterings under a specific notion of stability - individual-preference stability, which roughly requires the clustering produced to have the property that the average distance of any datapoint to points within its own cluster is smaller than the average distance to points within any other cluster. The clustering is said to be $\alpha$-stable if the average in-cluster distance is no more than a multiplicative $\alpha>1$ factor of the average distance to points assigned to any other cluster. This is a fairly natural problem, and was recently proposed by Ahmadi et al. (ICML 2022), who gave some preliminary results for it, including NP-hardness of deciding whether the input dataset admits a $1$-stable clustering. This paper makes three key contributions - (a) It shows that given any dataset with $n$ points in a general metric space, and any desired number of clusters $k$, there always exists a $k$-clustering that is $O(1)$-stable. (b) This $O(1)$-stable $k$-clustering can be found by a computationally efficient algorithm. Moreover, the resulting clustering produced is a constant factor approximation to the optimal $k$-center objective. (c) For min and max stability (i.e. where the average is replaced with min and max, respectively), they show that for any dataset in a general metric space, and for any choice of $k$, (i) a min-stable $k$-clustering always exists, and is achieved by the usual early termination of Kruskal's minimum spanning tree algorithm, and (ii) a 3-approximate max-stable $k$-clustering always exists and is achieved by the greedy $k$-center algorithm. Strengths: This paper considers a very natural question of obtaining a stable clustering, and substantially expands our understanding of this problem. Overall the paper is quite well-written and easy to read, and would be well-suited for a venue such as NeurIPS. 
(a) They show that given a set of any $n$ points in an arbitrary metric space, a $O(1)$-stable $k$-clustering always exists for any choice of $k$. Moreover, the clustering includes all $n$ points. The only known results prior to this work were the existence of a $1$-stable clustering for any set of $n$ points in a $1$-D Euclidean space or metrics induced by a weighted tree, or a bicriteria $O(\log^2 n/\epsilon)$-stable clustering achieved by discarding an $\epsilon$-fraction of the points from the input set. For the stricter requirement of clustering all $n$ points, the only known result was a trivial $O(n)$-stable clustering. (b) The algorithm that finds this $O(1)$-stable $k$-clustering is simple and quite computationally efficient. The earlier bicriteria approximation result was achieved by applying the HST hammer naively. The new algorithm has the additional desirable property that the resulting clustering produced achieves a constant factor approximation for the $k$-center objective. Weaknesses: The paper has two main weaknesses, which in my opinion are not deal breakers given its other strengths. (a) The extension to $f$-stable clusterings is obvious, and the results for $f=$ min and max stable clusterings are also very elementary. It would have been far more interesting had the authors characterized properties of a generic $f$ required such that any instance always admits an $O(1)$-$f$-stable $k$-clustering. (b) The fact that the algorithm achieves a $O(1)$ approximation to the $k$-center objective seems like an afterthought. It doesn't seem like the algorithm was explicitly designed to additionally achieve good approximation guarantees for this other objective, and more like it happened to be that way. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I understand that these questions are out of scope of this paper, but I'm curious about the following thing - Given how you say that it's only interesting to look at stability when the algorithm additionally gives good approximation guarantees for other natural clustering objective functions, I really want to know what happens when you simultaneously want to optimize for other objective functions such as, say, $k$-means or $k$-medians. Is it possible to get a clustering that achieves a good approximation for these standard clustering objectives, while simultaneously being approximately stable? Or are there cases where these two objectives of clustering "quality" and "stability" are at odds with each other - achieving one objective must necessarily incur a large penalty for the other objective? My other question is related to your extension of the notion of stability, and one I have raised before in the weaknesses - can you characterize properties of a generic $f$ required such that any instance always admits an $O(1)$-$f$-stable $k$-clustering? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I don't see any obvious limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We address your concerns and questions below. > The extension to f-stable clusterings is obvious, and the results for min and max stable clusterings are also very elementary Indeed, the results for min and max stable clusterings come from quite straightforward observations, but we think it shows a nice connection between the individual preference objectives and classic algorithm problems (min spanning tree and k-center). The research question about generic f is a nice one for future work. It would be quite interesting to see if a general property holds that admits stable f-clusterings especially since the algorithms we have for min and max-stability use quite different techniques. > The fact that the algorithm achieves a O(1) approximation to the k-center objective seems like an afterthought From the point of view of our research process, this is correct: our main goal was to develop algorithms with small approximation factors for IP stability. The serendipity of the additional k-center guarantee doesn’t detract from the fact that simultaneously achieving individual and global clustering guarantees seems like it would be very valuable in practice (as we discussed in Section 1.2). A very nice open question (Appendix D and your question) is if good IP stability approximation can be combined with other global objectives. We believe our response above also answers your questions. --- Rebuttal Comment 1.1: Comment: Thank you for your response. And as I said, I am already quite happy with this submission, and do think it should be accepted. My comments and questions were more a product of curiosity than anything. I do agree that trying to understand under what conditions can stability be combined with other objectives, and are there objectives that are inherently at odds with stability is an interesting question that one should pursue following this work. I am leaving my positive score unchanged.
Summary: This paper concerns alpha-individual preference (IP) stable clusterings, meaning clusterings in which each point's average distance to points in its own cluster is at most alpha times its average distance to points in any other cluster. Previously, only O(n)-IP solutions were known, 1-IP solutions were known to not always exist, and we did not know if there existed O(1)-IP solutions. This paper answers affirmatively on metric spaces. They provide an O(1)-IP algorithm on metric spaces which additionally is a constant-factor approximation to the k-center problem. The algorithm follows the popular greedy ball technique on metrics, which selects radius-bounded balls (bounded by O(1) times the optimal k-center solution) that cover many points and are distant from each other, and then fixes unassigned points (according to the greedy cluster order) and merges some clusters according to their closest k-center centers. Additionally, they define variants Max-IP and Min-IP (where you instead look at the maximum/minimum distances, as opposed to the averages). They show Min-IP can be solved optimally by running Kruskal’s algorithm until there are k trees in the forest. Additionally, greedy k-center 3-approximates Max-IP. Finally, they experimentally validate their IP algorithm on real and synthetic (adversarially designed) datasets. On both, their algorithm outperforms k-means++, showing its robustness to hard instances. Strengths: This is a nice result that bridges a significant gap posed by previous works (i.e., the gap between O(n)-IP and O(1)-IP). In addition, it is complemented by the lower bound of 1-IP, and it implies a few nice future directions for this work, including finding the minimum alpha such that an alpha-IP solution exists and characterizing when 1-IP solutions exist. The extensions to Max-IP and Min-IP are also strong. For the most part, the writing was clear (though I did find some typos here and there). 
The methods seemed new in many ways, though they build heavily on existing methods (i.e., clustering by balls on metric graphs). Weaknesses: The biggest weakness is that, while the improvement in approximation is significant, I think the problem is somewhat narrow in scope. I noticed the authors only cited one past work on this area (presumably, the one that proposed it), and the applications were not cited. While this does certainly seem like an interesting new problem, I am not sure of what its place is within the ML research literature. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Can you motivate this problem a bit more? Has anyone other than [1] tried attacking this problem? 2. Do you have any citations for uses? 3. I understand that IP was the main focus of the paper, but it would be nice to have an argument for the motivations of Min-IP and Max-IP. Do you know any applications they might work for? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I did not see a limitations section. It would be preferable but is not necessary for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. We address your concerns and questions below. > The biggest weakness is that, while the improvement in approximation is significant, I think the problem is somewhat narrow in scope. I noticed the authors only cited one past work on this area (presumably, the one that proposed it), and the applications were not cited. While this does certainly seem like an interesting new problem, I am not sure of what its place is within ML research literature. We thank the reviewer for their comments. It is true that individual preference stability is a new concept (though we would argue a rather natural criterion) that was only introduced in ICML’22. However, there are some connections to prior work that we will highlight. IP stable clustering is closely related to a burgeoning line of work on fairness in machine learning and, relatedly, on individual (rather than global) objectives. In a preliminary version of [1], they formulated the problem as a fair clustering problem (see [2]). Various notions of fairness in clustering have been examined [3, 4] (https://www.fairclustering.com, https://www.fairclustering.com/files/fair-clustering-taxonomy.pdf), and IP stability can be viewed as an alternate definition. We note that the notion of stability is more general than fairness (there may be other reasons to want stability, for example if the actors being clustered are strategic and may try to change clusters if unhappy). In terms of applications, to the best of our knowledge, we do not know any published work that uses this notion explicitly, but we do see potential applications. There are cases when we want to incorporate fairness in clustering, e.g., in bank loan, an applicant would be upset if their loan application gets rejected while another individual with similar features has an approved application. Similar applications can be argued for job listings, school applications, et cetera. 
Apart from [1], a somewhat similar notion was studied in [5]. In particular, in Sect. 1.1 of [5], it is mentioned that in a general sense, for any good clustering notion, if $x \in C$ and $y \notin C$, then $x$ should be substantially closer to $C$ than $y$. [5] studied the case where the term ``substantially closer’’ is emphasized. Formally, a cluster $C$ is an $(\alpha,\gamma)$-cluster if $P(x \in C)>\alpha$ and for any $y \notin C$, $d(y,C) \geq \gamma d(x,C)$. An $(\alpha, \gamma)$-clustering is a clustering where each cluster is an $(\alpha,\gamma)$-cluster. This problem becomes tractable when $\alpha > 0, \gamma >3$. Our case corresponds to $(0,1)$-clustering. In fact, our algorithm finds a $(0, O(1))$-clustering. > I understand that IP was the main focus of the paper, but it would be nice to have an argument for the motivations of Min-IP and Max-IP. Do you know any applications they might work for? Similar to what the community does for fairness, we look into several notions of stability. While we do not know any applications yet, we believe that Min-IP, Max-IP, and more generally f-IP, are natural stability definitions. References: [1] Ahmadi, Saba, Pranjal Awasthi, Samir Khuller, Matthäus Kleindessner, Jamie Morgenstern, Pattara Sukprasert, and Ali Vakilian. "Individual Preference Stability for Clustering." In International Conference on Machine Learning. 2022. [2] Kleindessner, Matthäus, Pranjal Awasthi, and Jamie Morgenstern. "A notion of individual fairness for clustering." arXiv preprint arXiv:2006.04960 (2020). [3] Brubach, Brian, Deeparnab Chakrabarty, John P. Dickerson, Seyed Esmaeili, Matthäus Kleindessner, Marina Knittel, Jamie Morgenstern, Samira Samadi, Aravind Srinivasan, and Leonidas Tsepenekas. "Fairness in Clustering." [4] Chhabra, Anshuman, Karina Masalkovaitė, and Prasant Mohapatra. "An overview of fairness in clustering." IEEE Access 9 (2021): 130698-130720. [5] Daniely, Amit, Nati Linial, and Michael Saks. 
"Clustering is difficult only when it does not matter." arXiv preprint arXiv:1205.4891 (2012).
Summary: In $\alpha$-individual preference (IP) stable clustering, the average distance between every data point and other points in its cluster must be at most $\alpha$ times its average distance to points in any other cluster. This paper gives the first polynomial time $O(1)$-IP stable clustering algorithm in general metrics, which improves on [Ahmadi et al., ICML 2022]'s $O(n)$-IP stable result. The algorithm in this paper also has more interesting features that I appreciate. Firstly, it satisfies an even stronger uniform IP stability guarantee. Precisely, there exists a global parameter $r$, such that for every data point, its average distance to points in its cluster is bounded by $O(r)$ while its average distance to points in any other cluster is at least $\Omega(r)$. Secondly, the k-clustering it finds is also a constant factor approximation to the classical k-center clustering problem, which makes the solution much more meaningful in applications. I think this result shows that k-center clustering can admit IP stability without paying too much. Exciting news! Moreover, the algorithm presented is clean and easy to follow, with its analysis building on certain clever observations. The authors also run experiments against [Ahmadi et al., ICML 2022] to show the practical value of their algorithm. Lastly, the authors also study other definitions of IP-stable and obtain similar results. Overall, I recommend "accept". Strengths: 1. The first efficient algorithm to compute an O(1)-IP stable approximate clustering in general metrics. Additionally, the algorithm has more important features beyond IP stability. The result significantly improves the previous works. 2. The algorithm and analysis are interesting and have some non-trivial insights. 3. The experiments also outperform the baseline. Weaknesses: Both the constants for IP approximation and k-center approximation may be too large. 
While IP stability is motivated by practical applications, I would expect that the guarantee is not a quite large number such as 2 or 3. The paper's experiment already shows that the worst case may be avoided in practice and I would like to see more discussion in this direction. Anyway, as a theoretical submission, I think this is totally fine and does not lower my rating. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. It is argued that the running time can be improved in Euclidean space. What is the exact running time that you can achieve? What about $\tilde{O}(nkd)$? (a typical running time for Euclidean $k$-clustering.) 2. Have you formally proved that your algorithm returns an $O(1)$-approximation for $k$-center in the main text? I know it can be seen from Algorithm 2 but it is better to provide formal proof and write the exact approximation ratio. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I do not see the potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. Below is our response. > "[...] the constants for IP approximation and k-center approximation may be too large." The focus of this paper is to give a constant factor approximation, improving upon the only O(n)-approximation of the IP-stability clustering problem. For the sake of clarity of the presentation, we did not try to optimize the constants here. > "The paper's experiment already shows that the worst case may be avoided in practice and I would like to see more discussion in this direction." This is indeed an interesting future direction to explore the approximability of IP-stability for instances in practice. A challenge in this direction is to identify the properties that result in better IP-stable clustering guarantees. > It is argued that the running time can be improved in Euclidean space. What is the exact running time that you can achieve? What about O~(ndk) (a typical running time for Euclidean k-clustering.)? We have opted for simplicity and generality to ensure our algorithm applies to any metric space. As stated in Section 3, our algorithm can be naively implemented in $\tilde{O}(n^2T)$ time for any metric space where $T$ is the time to compute a distance between two points. However, we believe that for specialized metrics, such as Euclidean metrics, it is an interesting future question to obtain faster algorithms. For example, one can show that for *constant* dimensional Euclidean space, our algorithm can be made to run in O(nk) time. > Have you formally proved that your algorithm returns an O(1) approximation for k-center in the main text? I know it can be seen from Algorithm 2 but it is better to provide formal proof and write the exact approximation ratio The k-center guarantee is shown in Theorem 3.1. Per the reviewer's suggestion, we will include the exact k-center approximation factor in the final version.
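The Min-IP result discussed in the reviews above — Single Link clustering, i.e. Kruskal's algorithm terminated early once k components remain — is simple enough to sketch. This is my own illustrative implementation with a union-find structure, not the authors' Algorithm 3:

```python
def min_ip_clustering(D, k):
    """Single-linkage clustering: run Kruskal's MST algorithm on the complete
    graph with edge weights D[i][j], stopping when k components remain.

    D: n x n distance matrix (list of lists or array).
    Returns a length-n list of component representatives (cluster labels).
    """
    n = len(D)
    parent = list(range(n))

    def find(x):                         # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    edges = sorted((D[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    comps = n
    for w, i, j in edges:
        if comps == k:
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            comps -= 1
    return [find(i) for i in range(n)]
```

Intuitively, every edge not contracted is at least as long as every edge inside a cluster's merge history, which is why the min-distance version of IP stability holds exactly for this clustering.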
NeurIPS_2023_submissions_huggingface
2023
One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation
Accept (poster)
Summary: In this paper, the authors propose a new heterogeneous knowledge distillation approach. The core idea is to map the intermediate layer features of the network to a unified logit space to eliminate feature mismatches caused by different structures. The author conducts thorough experiments on distilling between CNN, ViT, and MLP networks. According to the experimental results in the paper, the proposed approach yields promising results. Strengths: 1. The experiments in this paper are comprehensive, considering distillation between various networks with different structures, and conducting experiments on both CIFAR-100 and ImageNet datasets. 2. The proposed method is reasonable, as projecting network features onto a latent space to avoid the alignment issue of distillation between networks with different structures may indeed lead to better results. 3. According to the experimental results, the proposed method achieves good performance in various experiments. Weaknesses: 1. The newly proposed method is very similar to deep supervision in that both involve adding an auxiliary head to the intermediate layer to learn the final output. The only difference is that deep supervision previously learned hard labels, while the proposed method learns soft labels from the teacher. However, this paper does not discuss the differences between this method and deep supervision, including theoretical and experimental results. 2. In some experimental settings, the improvement brought by the proposed method is very limited. And the comparison is not comprehensive. For example, in Table 1 and Table 2, OFD, Review, and CRD's results are missing. 3. In addition, in Table 1, some experiments are combined with FitNet while others are not. Although the authors have provided an explanation, it still feels strange that we cannot conclude that ResNet50's features are more applicable just because it is the most commonly used network. 
And why is ResNet50 not adopted as the teacher in Table 2? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please address the problems in Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Potential negative societal impact is not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Weakness 1:** The newly proposed method is very similar to deep supervision **Response to the weakness 1:** Thanks for your suggestions for improving the quality of our work. Deep supervision [1,2,3] introduces intermediate supervision during training to mitigate the gradient vanishing/exploding problem in the early era of deep learning. In contrast, feature distillation involves transferring knowledge at intermediate layers, aiming to equip the student with a more comprehensive understanding of the teacher's knowledge. Our OFA method falls within this category of approaches. To conduct a comparison between the deep supervision method and OFA, we trained a ResNet18 model using deep supervision by replacing the OFA loss with a hard-label cross-entropy loss. We evaluated its performance against that achieved by OFA with a DeiT-T teacher. Notably, the position and number of auxiliary branches used in deep supervision remained consistent with OFA. |Teacher|Student|Method|Top-1| |:-:|:-:|:-:|:-:| |DeiT-T|ResNet18|OFA|71.34| |-|ResNet18|Deep supervision|70.06| Based on the results above, it is evident that the model trained using our OFA framework outperforms the model trained with deep supervision by a significant margin. This disparity in performance underscores the superiority of our OFA in comparison to deep supervision. [1] Wang, Liwei, et al. "Training deeper convolutional networks with deep supervision." [2] Zhang, Linfeng, et al. "Contrastive deep supervision." [3] Li, Renjie, et al. "A comprehensive review on deep supervision: Theories and applications." > **Weakness 2:** In some experiments the improvement is limited. The comparison is not comprehensive. **Response to the weakness 2:** Thanks for your valuable comments. Referring to the results in our main paper, our OFA method exhibits only a slight performance gain over the **second-best** baseline across certain teacher-student combinations. 
Notably, the second-best result isn't consistently obtained by the same baseline, signaling the challenges associated with generalizing existing methods to the context of heterogeneous KD. In contrast, our OFA method consistently outperforms all other baselines across all scenarios, yielding satisfactory results. This consistently strong performance underscores its generic applicability for cross-architecture KD. The OFD method is tailored for CNN models, leveraging features between Batch Normalization (BN) and ReLU layers for distillation. However, ViT/MLP architectures generally lack such intermediary positions, posing a challenge to the application of OFD. Furthermore, previous works like DIST and DKD have demonstrated their superiority over OFD. Hence, we have opted not to include OFD in our analysis. To assess the performance of Review, we selected two teacher-student pairs and conducted experiments on the ImageNet-1K dataset. The outcomes, as illustrated in the table below, reveal that Review falls short of our OFA method by a noticeable margin. Notably, Review mandates features with dimensions of (N, C, H, W), where N, C, H, and W denote the batch size, number of channels, height, and width, respectively. Since ViT-generated features do not possess this structure, an "unpatchify" operation is required to transform them. We hypothesize that this transformation contributes to the less-than-ideal results obtained using Review. We intend to incorporate this discussion into our final revision. |Teacher|Student|KD|Review|OFA| |:-:|:-:|:-:|:-:|:-:| |DeiT-T|ResNet18|70.22|70.28|71.34| |ConvNeXt-T|DeiT-T|74.00|68.10|74.41| As for CRD, we have conducted the missing experiments on CIFAR-100, as shown in the following table, and we will include these results in our final revision.
|Teacher|Student|CRD| |:-:|:-:|:-:| |Swin-T|ResNet18|77.63| |ViT-S|ResNet18|76.60| |Mixer-B/16|ResNet18|76.42| |Swin-T|MobileNetV2|79.80| |ViT-S|MobileNetV2|78.14| |Mixer-B/16|MobileNetV2|78.15| |ConvNeXt-T|DeiT-T|65.94| |Mixer-B/16|DeiT-T|65.35| |ConvNeXt-T|Swin-P|67.09| |Mixer-B/16|Swin-P|67.03| |ConvNeXt-T|ResMLP-S12|63.35| |Swin-T|ResMLP-S12|61.72| > **Weakness 3:** In Table 1, some experiments are combined with FitNet while others are not. Why is ResNet50 not adopted in Table 2? **Response to the weakness 3:** Thanks for your professional concern. To ensure a fairer comparison, we have chosen the two most competitive baselines, i.e., DKD and DIST, and have integrated them with FitNet to train the student. The outcomes are illustrated in the table below. |Teacher|Student|DKD+FitNet|DIST+FitNet|OFA+FitNet| |:-:|:-:|:-:|:-:|:-:| |ResNet50|DeiT-T|75.60|75.13|76.55| |ResNet50|Swin-N|78.23|77.95|78.64| |ResNet50|ResMLP-S12|78.23|77.71|78.53| The results indicate that DKD and DIST achieve satisfactory performance when integrated with FitNet. Nevertheless, our OFA method surpasses them, thereby highlighting the effectiveness of our approach. We speculate that designs based on traditional 3x3 convolution like ResNet50 have the capability to capture local information, such as texture details, better than ViT/MLP. In the process of "patchify", ViT might overlook these local details. Therefore, when ResNet50 serves as the teacher, FitNet can provide valuable intermediate representations. We will correct the original statements in Lines 244-246 of the main paper. Regarding the selection of ResNet50 as the teacher in Table 2 (experiments conducted on CIFAR-100), we conducted experiments to compare the performance of FitNet and OFA on two heterogeneous teacher-student pairs, i.e., ResNet-DeiT and ResNet-ResMLP.
|Teacher|Student|FitNet|OFA|OFA+FitNet| |:-:|:-:|:-:|:-:|:-:| |ResNet50|DeiT-T|75.62|75.88|76.14| |ResNet50|ResMLP-S12|73.67|74.38|74.50| As shown in the table above, the integration of OFA and FitNet yields a further enhancement in the performance of the student model. We will include the remaining omitted results, where ResNet50 serves as the teacher, in our final revision.
Summary: This paper introduces a new method, named OFA-KD, to distill knowledge between heterogeneous models. This paper proposes to project the intermediate features into logits for distillation. A new loss function is also introduced in this paper to adaptively enhance the target information. Extensive experiments verify the effectiveness of this method. Strengths: 1. This paper is easy to understand and clearly written. 2. This paper uses CKA to visualize the differences between CNN, ViT, and MLP. 3. Extensive experiments are conducted to verify the effectiveness of this method. Weaknesses: 1. The improvement on ImageNet-1K is relatively minor, mostly around 0.1%-0.3% in Tables 1, 5, and 6. This method does not exhibit a clear advantage over other techniques. 2. The architecture employed in this work has been extensively explored. This form of intermediate logit supervision has been widely used in BYOT, DCM, DKS, and other methods. This study does not offer any significant novelty in the context of distillation. 3. Table 1 fails to compare with the latest feature-based distillation methods, particularly SemCKD, which is a method dedicated to heterogeneous distillation. Thus, the experimental comparisons presented in this paper lack meaningfulness. The authors should compare with the recent state-of-the-art hint-based methods. 4. The authors' use of the CKA comparison is unnecessary, given the obvious architectural differences between CNN, MLP, and ViT. The differences between these architectures have been discussed in many works. 5. Please provide the results of OFA during distillation between heterogeneous networks such as VGG, ResNet, ShuffleNet, and MobileNet. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In Tables 1 and 2, the results of many distillation methods are not as high as the baseline results; please explain your implementation details and why this is the case. 2.
Is there any quantitative indicator to prove that the method proposed in this paper really bridges the gap of heterogeneous distillation? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer R7Hg part (1/2) **Response to the weakness 1:** In Table 5, the improvement when compared to methods tailored for homogeneous architectures, such as ResNet34-ResNet18, is indeed marginal. However, when evaluating heterogeneous pairs, as shown in Table 1, our OFA-KD consistently outperforms the second-best baseline by a range of 0.1% to 0.7%. Notably, the second-best result is not consistently obtained by the same baseline. Given the intricate nature of cross-architecture KD, where architecture-specific nuances can hinder student learning, previous methods have struggled to consistently achieve satisfactory results. For instance, while DIST achieves the second-best performance across multiple teacher-student combinations, it slightly lags behind our proposed method; on certain other teacher-student pairs, such as the Swin-T teacher with a ResNet18 student, DIST's performance significantly trails that of OFA. In contrast, OFA consistently attains the top performance, underscoring its applicability as a universal solution for both heterogeneous and homogeneous KD scenarios. **Response to the weakness 2:** Intermediate supervision is widely used in hint-based learning approaches like FitNet. The final structures of BYOT and DCM, with the addition of auxiliary classifiers, bear similarities to our OFA. However, our focus is primarily on cross-architecture KD. Given the notable divergence in features learned by heterogeneous models, a specialized information-filtering method is essential to align features. We emphasize that directly mimicking the feature space of a heterogeneous teacher can lead to suboptimal results. Instead, we find it more effective to transfer mismatched representations into the aligned logits space through the integration of additional exit branches within the student model.
We leave the exploration of more efficient modules or alternative spaces for aligning features to future research. In essence, intermediate logits supervision can be viewed as a specialized approach for aligning heterogeneous features. Furthermore, we introduce a novel distillation loss that adaptively enhances target information by introducing a modulating parameter into the original KD loss. Our ablation study validates the efficacy of this design. We will provide a summary of these architecture designs, including BYOT/DCM/DKS, in the related work section. **Response to the weakness 3:** Thanks for your valuable comments. Firstly, we would like to clarify the scope of our usage of the term "heterogeneous." In our paper, we specifically consider CNN, ViT, and MLP models as examples of heterogeneous architectures. However, in the context of SemCKD, the definition of heterogeneous models is broader. For instance, SemCKD categorizes models like VGG and ResNet as heterogeneous, whereas our paper treats them as homogeneous CNN models. We have also carried out experiments using SemCKD. Given that SemCKD is tailored for CNN models, we transformed intermediate features of ViT models into the CNN format using an "unpatchify" operation. Our experiments involve two teacher-student pairs: DeiT-T - ResNet18 and ConvNeXt-T - DeiT-T. To facilitate comparison, we present the results of SemCKD alongside KD and OFA on ImageNet-1K. |Teacher|Student|KD|SemCKD|OFA |:-:|:-:|:-:|:-:|:-: |DeiT-T|ResNet18|70.22|70.12|71.34 |ConvNeXt-T|DeiT-T|74.00|71.96|74.41 The above results show that, within the more stringent "heterogeneous" context, SemCKD demonstrates lower performance than our OFA. We conjecture that the coarse "unpatchify" operation hinders effective information transfer. If we intend to extend the applicability of SemCKD to encompass generalized cross-architecture KD, a more meticulous design is imperative to achieve semantic calibration.
We believe this is an important avenue for future research and plan to incorporate the discussion in our final revision. **Response to the weakness 4:** We employed CKA to illustrate that heterogeneous architectures learn distinct features, as indicated by their CKA similarity. This served as the foundation for devising feature alignment methods in the context of cross-architecture KD, enhancing the clarity and logical flow of the paper. While some existing works delve into different architectures, very few have simultaneously compared all three types. Hence, our analysis is more comprehensive. Furthermore, we haven't listed the CKA analysis as a specific contribution in our paper. **Response to the weakness 5:** Thanks for the valuable suggestions. Considering rebuttal time constraints, we restricted our experimentation to the ResNet50 teacher with MobileNet-v1/VGG-16 students. The ResNet-MobileNet teacher-student combination has been extensively utilized in numerous related studies, making it convenient to obtain baseline results, and we conducted the ResNet-VGG experiments ourselves to establish the baseline results. Following the recipe in DKD, we trained the student using our OFA and present the results in the following table. |Teacher|Student|KD|OFD|Review|CRD|DKD|DIST|OFA| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |ResNet50|MobileNet-v1|70.68|71.25|72.56|71.37|72.05|73.24|73.28| |ResNet50|VGG-16|73.96|73.31|74.08|74.02|74.69|74.75|74.88| From the results, the performance of OFA is comparable to that of the best baseline, DIST, with a slight accuracy gain of 0.04%. Notably, the models mentioned above are considered homogeneous in our paper, as they all belong to the CNN architecture. Our OFA method is primarily tailored for cross-architecture KD and consistently surpasses previous baseline approaches.
However, even when both the teacher and student belong to homogeneous architectures, OFA still manages to achieve competitive performance, underscoring its effectiveness. ### ***The part (2/2) can be found in "Author Rebuttal" at the top of this page.*** --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response. The rebuttal addresses most of my concerns. I have raised my score. I would be very glad to see the authors release their code if this paper is accepted. --- Reply to Comment 1.1.1: Title: Response to Reviewer R7Hg Comment: Dear Reviewer R7Hg, We sincerely appreciate you taking the time to review our paper and response, and helping to improve this paper. We will carefully follow the reviewer's advice to incorporate all the addressed points in the updated version. And we will release the code if our paper is accepted. Best, Authors
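To make the "exit branches into the aligned logits space" idea discussed above concrete, here is a purely illustrative NumPy sketch. The class name `ExitBranch`, the pooling-plus-linear head, and all shapes are our own assumptions for illustration; the paper reportedly uses depth-wise convolutional or ViT blocks as branches, not a bare linear head.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class ExitBranch:
    """Hypothetical sketch: map an intermediate CNN feature map (N, C, H, W)
    into the shared logits space via global average pooling + a linear head."""

    def __init__(self, in_channels, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((in_channels, num_classes)) * 0.01
        self.b = np.zeros(num_classes)

    def __call__(self, feat):
        pooled = feat.mean(axis=(2, 3))   # (N, C): discard spatial layout
        return pooled @ self.w + self.b   # (N, num_classes): branch logits

def kd_loss(student_logits, teacher_logits, tau=1.0):
    """Plain KL-style KD loss between branch logits and teacher logits."""
    p_t = softmax(teacher_logits / tau)
    log_p_s = np.log(softmax(student_logits / tau) + 1e-12)
    return -(p_t * log_p_s).sum(axis=1).mean()

# Toy usage: one intermediate stage of a student, distilled against teacher logits.
feat = np.random.default_rng(1).standard_normal((4, 64, 8, 8))
branch = ExitBranch(in_channels=64, num_classes=10)
teacher_logits = np.random.default_rng(2).standard_normal((4, 10))
loss = kd_loss(branch(feat), teacher_logits)
```

The point of the sketch is only that distilling in the logits space sidesteps any mismatch between CNN-style and ViT-style intermediate feature shapes, since both sides end up as (N, num_classes) logits.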
Summary: This paper tackles the problem of cross-architecture distillation, that is, the teacher and the student in KD are of different model architectures. By using centered kernel alignment, the authors observe that features learned by models of different architectures show significant feature divergence, indicating that previous hint-based methods are not suited for this task. To bridge this gap, the authors propose a simple yet effective one-for-all KD framework called OFA-KD. Specifically, they project intermediate features into an aligned latent space to discard architecture-specific information. And an adaptive target enhancement scheme is proposed to prevent the student from being disturbed by irrelevant information. The authors conduct experiments on CIFAR-100 and ImageNet-1k benchmarks with CNN, ViT and MLP architectures to demonstrate the effectiveness of the proposed method. Strengths: Motivation: The motivation is clear. Cross-architecture distillation provides more feasible options for teacher models, as it may not always be possible to find a superior teacher model with a homogeneous architecture. Originality: Learning in an aligned latent space is the first application in cross-architecture distillation, and the adaptive target information enhancement loss is novel. Quality: The writing of the paper is good. Sufficient experiments and ablation studies with other methods and the proposed variants are implemented. Clarity: The paper consists of text explanations and an illustration of the OFA-KD framework and the proposed loss. Significance: The paper solves the problem of cross-architecture distillation, expanding feasible options for teacher models in practice, and improves the accuracy of distilled student models. Weaknesses: 1. In the CKA analysis, it seems that when comparing features of models of the same architecture, the authors just use features of one model on both the x-axis and y-axis, as the corresponding heatmaps are symmetric.
If using two models, such as ResNet18 vs. ResNet34 or two ResNet18 models trained with different initializations, would the results be different? 2. The authors propose using an aligned latent space for cross-architecture distillation, adopting the logits space as a special instance. I wonder whether there are any other possible choices of the space, and what will happen when using them. 3. The introduction of the adaptive target information enhancement loss is a bit complicated. Maybe it is possible to simplify the notations. 4. What is the principle of designing the branches, such as their architecture and layer number? Have the authors tried other branch designs? Additionally, there are some grammar mistakes and typos: - line 192, "slow" -> "slowly" - line 193, "enhance" -> "enhancement" - line 255, "additional" -> "addition" - line 257, "reports" -> "report" - line 299, "strengthen" -> "strength" - line 311, "methods" -> "method" - line 339, "disitllation" -> "distillation" Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to "Weaknesses" Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Limitations and broader impact are discussed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Weakness 1:** In the CKA analysis, it seems that when comparing features of models of the same architecture, the authors just use features of one model on both the x-axis and y-axis, as the corresponding heatmaps are symmetric. If using two models, such as ResNet18 vs. ResNet34 or two ResNet18 models trained with different initializations, would the results be different? **Response to the weakness 1:** Thank you for your valuable suggestion. We have expanded our CKA analysis to encompass various model architectures and presented these additional results in the PDF uploaded in the "Author Rebuttal" section. We will also incorporate these results into our supplementary material. Specifically, for the CKA analysis we use ResNet18 and ResNet34 as CNN models, Swin-Small and Swin-Base as ViT models, and Mixer-B/16 and ResMLP-S12 as MLP models. The results reveal that homogeneous models, such as a ResNet18 model and a ResNet34 model, exhibit similar feature learning at comparable positions within the model. Conversely, heterogeneous models continue to manifest distinct feature learning patterns. This corroborates the conclusions drawn in our main paper. > **Weakness 2:** The authors propose using an aligned latent space for cross-architecture distillation, adopting the logits space as a special instance. I wonder whether there are any other possible choices of the space, and what will happen when using them. **Response to the weakness 2:** The utilization of an aligned latent space aims to mitigate the adverse effects of disparate information present in the features learned by divergent teacher and student models. Therefore, the guiding principle in selecting an aligned latent space is to retain shared information while discarding irrelevant details. The logits space satisfies these prerequisites and is straightforward to implement, thus we have adopted it as the aligned latent space in our experiments.
We acknowledge that there might exist more efficient latent spaces for cross-architecture KD, for example, a manifold space [1], which exclusively compares the relationships among learned features while remaining insensitive to the absolute feature values. However, crafting an optimal latent space remains a challenging task, given the absence of well-defined principles for measurement. Consequently, we consider this issue a subject for future research exploration. [1] Hao, Zhiwei, et al. "Learning efficient vision transformers via fine-grained manifold distillation." Advances in Neural Information Processing Systems 35 (2022): 9164-9175. > **Weakness 3:** The introduction of the adaptive target information enhancement loss is a bit complicated. Maybe it is possible to simplify the notations. **Response to the weakness 3:** Thank you for your suggestion. We have simplified the notations and will incorporate them into our next version. > **Weakness 4:** What is the principle of designing the branches, such as their architecture and layer number? Have the authors tried other branch designs? **Response to the weakness 4:** We first outline our approach to determining the block number within each branch. In our experimental setup, we partition all models into four stages. For pyramid architectures, this division is straightforward (four down-sampling layers). However, for models like DeiT, we adopt the division scheme used in the Swin Transformer architecture. For instance, DeiT-T and Swin-T both consist of 12 blocks. Thus, we divide DeiT-T into four stages, each comprising 2, 2, 6, and 2 blocks, mirroring the structure of Swin-T. Regarding the branch architecture, we adopt depth-wise convolutional blocks for CNN models and vision transformer blocks for ViT and MLP models. This design is rooted in our belief that a mismatch between branch and backbone architectures could hinder student performance.
To verify this assumption, we conducted experiments comparing various branch architectures. The results, as shown in the table below, underscore that homogeneous pairings of branch and backbone outperform heterogeneous pairings. Additionally, our supplementary material presents PyTorch-style pseudocode for constructing branches. | Teacher | Student | Branch architecture | Top-1 | | :--------: | :------: | :-----------------: | :---: | | DeiT-T | ResNet18 | CNN | 71.34 | | DeiT-T | ResNet18 | ViT | 70.82 | | ConvNeXt-T | DeiT-T | CNN | 74.27 | | ConvNeXt-T | DeiT-T | ViT | 74.41 | > **Weakness 5:** Additionally, there are some grammar mistakes and typos: **Response to the weakness 5:** Thank you for thoroughly reviewing our submission and pointing out the mistakes. We have taken great care to address these issues in our next version.
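For reference, the CKA similarity used in the analysis above can be computed with the standard linear-CKA formula. This is a minimal generic sketch (the feature matrices and their shapes are placeholders, not the paper's actual setup):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices of shape (n_samples, dim).

    Features are mean-centered per dimension; the score is 1.0 when the two
    representations agree up to rotation and isotropic scaling.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
feats_a = rng.standard_normal((64, 32))  # e.g. pooled features from model A
feats_b = rng.standard_normal((64, 48))  # features from model B, any width

sim_self = linear_cka(feats_a, feats_a)   # close to 1.0 for identical features
sim_cross = linear_cka(feats_a, feats_b)  # lower for unrelated features
```

Note that the two inputs may have different feature dimensions, which is what makes CKA convenient for comparing layers of heterogeneous architectures.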
Summary: This paper first demonstrates that there is significant feature divergence between the features learned by heterogeneous teacher and student models, a scenario rarely explored in the previous knowledge distillation literature. The authors point out that hint-based methods are ineffective in this cross-architecture distillation setting. They propose OFA-KD to improve the distillation performance between heterogeneous architectures. It first projects intermediate features into an aligned latent space (the logits space). In addition, they introduce an adaptive target enhancement scheme to prevent the student from being disturbed by irrelevant information. The extensive experiments with various architectures demonstrate the superiority of the OFA-KD framework. Strengths: This paper studies KD with different architectures. It is an interesting attempt to build a generic framework for distilling students with arbitrary mainstream model architectures, i.e., CNN, ViT, and MLP. Experiment results in both the main paper and the supplementary material demonstrate the necessity of doing cross-architecture distillation. The performance improvement of OFA is remarkable. On ImageNet-1K, the maximum improvement is 0.8%, and on CIFAR-100, the maximum improvement is 5.0%, even compared with the most recent KD baselines, DIST and DKD. Weaknesses: The additional branches will increase the training cost. I think the authors should give more analyses on this. The reported results of using ResNet50 as the teacher on ImageNet-1K are obtained by using both FitNet and OFA-KD (Table 1). I think this is not a fair comparison. For example, what's the result of FitNet + DKD/DIST?
Missing some references also adopting a multi-branch architecture: [A] Be your own teacher: Improve the performance of convolutional neural networks via self distillation, ICCV 2019 [B] Distillation-based training for multi-exit architectures, ICCV 2019 [C] MSD: Multi-Self-Distillation Learning via Multi-classifiers within Deep Neural Networks, arXiv 2019 Technical Quality: 3 good Clarity: 3 good Questions for Authors: The OFA result (ViT-B teacher) in Table 6 of the main paper seems inconsistent with that in Figure 5 of the supplementary material. The accuracy gain of the ResNet50 student is 1.19% in the main paper, while in the supplementary material, the accuracy gain is 1.47%. Please check it. As there are additional branches introduced during the training procedure, it requires more computational resources to train the same student model than using traditional KD methods. Could the authors provide more details about the branches to illustrate the extra resource consumption? For example, I am interested in the number of parameters and the FLOPs of the additional branches. Why does the paper choose to use depth-wise convolutional layers for branches in CNN models, while ViT blocks are used in ViTs and MLPs? There are so many hyperparameters, such as the scaling factor, the clip grad norm, and the $\gamma$ value. How to choose these hyperparameters in practice? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Limitations and broader impact are discussed in the conclusion.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Weakness 1:** The additional branches will increase the training cost. > > **Question 2:** Could the authors provide more details about the branches **Response to the weakness 1 & question 2:** Thanks for your valuable comments. The inclusion of extra branches inevitably demands more computational resources during training. To mitigate this concern, we have employed a streamlined design for the branch architecture. For example, in CNN models, we've reduced the number of channels in the depth-wise convolution layers within the branches, and for ViT and MLP models, we've introduced patch merging blocks before the vision transformer blocks within the branches to reduce the number of tokens that need to be processed. We've chosen three distinct student architectures, i.e., ResNet18, DeiT-T, and ResMLP-S12, and compared the number of parameters and FLOPs between their backbones and exit branches. As depicted in the table below, the exit branches add only a small amount of computation relative to the backbones, contributing minimally to the overall training expense while delivering accuracy improvements. Moreover, given that the primary objective of KD is to compress a pre-trained model for easier deployment, a slight elevation in training cost remains acceptable. Our OFA-KD approach enhances the performance of student models while incurring no additional inference cost. | Teacher | Student | Student Params | Student FLOPs | Branch Params | Branch FLOPs | | :------: | :--------: | :------------: | :-----------: | :-----------: | :----------: | | DeiT-T | ResNet18 | 11.69M | 1.83G | 3.04M | 0.08G | | ResNet50 | DeiT-T | 5.72M | 1.26G | 4.48M | 0.19G | | ResNet50 | ResMLP-S12 | 15.35M | 3.01G | 16.32M | 0.74G | > **Weakness 2:** result of FitNet + DKD/DIST?
**Response to the weakness 2:** FitNet appears to be particularly effective as a knowledge distillation approach when utilizing ResNet50 as the teacher and ViT/MLP as the student. However, these promising outcomes are not observed with other model combinations. As a result, we exclusively present results obtained by combining OFA and FitNet for these specific teacher-student pairs. To ensure a fair comparison, we proceed to conduct experiments merging FitNet with DKD and DIST for these models, and subsequently report their respective performances in the table below. | Teacher | Student | DKD+FitNet | DIST+FitNet | OFA+FitNet | | :------: | :--------: | :--------: | :---------: | :--------: | | ResNet50 | DeiT-T | 75.60 | 75.13 | 76.55 | | ResNet50 | Swin-N | 78.23 | 77.95 | 78.64 | | ResNet50 | ResMLP-S12 | 78.23 | 77.71 | 78.53 | As the results demonstrate, even though DKD and DIST also yield improved performance when integrated with FitNet, our OFA-KD method consistently outperforms them. This underscores the efficacy of our approach. We will incorporate the new results obtained through FitNet + DKD/DIST into our next version. > **Weakness 3:** Missing some references **Response to the weakness 3:** Thank you for your valuable suggestion. We have appropriately integrated these references into the draft of our next version. > **Question 1:** the inconsistent result. **Response to the question 1:** Thank you for your thorough review of our paper. There was an error in our main paper regarding the accuracy gain, which should be 1.47%. We have rectified this mistake in the revised version. > **Question 3:** principle of choosing branches **Response to the question 3:** Our decision to adopt the CNN branch architecture for CNN models and the ViT branch architecture for ViT/MLP models is grounded in our belief that a heterogeneous pairing of branches and backbones could potentially hinder student performance.
To substantiate this perspective, we conducted a series of experiments evaluating the outcomes of employing diverse branch and backbone configurations. The results below show that a homogeneous combination of branch and backbone consistently yields superior performance. As a result, we embraced this unified configuration, aligning with our objective of improving student model performance. Furthermore, the incorporation of depth-wise convolutional layers within the CNN branch architecture serves to mitigate the additional training cost attributed to branches. These convolutional layers have also demonstrated their effectiveness in several related studies [1,2]. | Teacher | Student | Branch architecture | Top-1 | | :--------: | :------: | :-----------------: | :---: | | DeiT-T | ResNet18 |CNN| 71.34 | | DeiT-T | ResNet18 |ViT| 70.82 | | ConvNeXt-T | DeiT-T |CNN| 74.27 | | ConvNeXt-T | DeiT-T |ViT| 74.41 | [1] Be your own teacher: Improve the performance of convolutional neural networks via self distillation [2] Task-oriented feature distillation > **Question 4:** How to choose these hyperparameters **Response to the question 4:** In our ablation study, we evaluated the impact of these hyperparameters. Notably, we discovered that the scaling factor can be disregarded by simply setting it to 1, as this adjustment is sufficient to achieve satisfactory results. As for the clip grad norm, the best result is obtained when the value is set to 5, which is a common setting used by many other related works, so we simply adopt clip_grad=5 for all combinations of teacher and student. While there isn't a universally applicable setting for the parameter $\gamma$, we can determine its optimal value using a validation set, considering that the other hyperparameters remain fixed.
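The rebuttal describes $\gamma$ only as a modulating parameter that adaptively enhances target information in the KD loss, without spelling out the formula. As a purely hypothetical illustration of how such a modulation could work (our own sketch, not the paper's actual loss), one can push the teacher's soft targets toward the ground-truth class by an amount controlled by $\gamma$:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def target_enhanced_kd_loss(student_logits, teacher_logits, targets,
                            gamma=1.0, tau=1.0):
    """Illustrative sketch only (not the paper's exact formula): boost the
    teacher's probability on the ground-truth class, renormalize, and use the
    result as the soft target in a KD cross-entropy term."""
    p_t = softmax(teacher_logits / tau)
    onehot = np.eye(p_t.shape[1])[targets]
    # Larger gamma pushes the soft target toward the one-hot label.
    enhanced = p_t * (1.0 + gamma * onehot)
    enhanced /= enhanced.sum(axis=1, keepdims=True)
    log_p_s = np.log(softmax(student_logits / tau) + 1e-12)
    return -(enhanced * log_p_s).sum(axis=1).mean()

# Toy usage with random logits and labels.
rng = np.random.default_rng(0)
student = rng.standard_normal((8, 5))
teacher = rng.standard_normal((8, 5))
labels = rng.integers(0, 5, size=8)
loss_plain = target_enhanced_kd_loss(student, teacher, labels, gamma=0.0)
loss_enh = target_enhanced_kd_loss(student, teacher, labels, gamma=2.0)
```

With gamma=0 this reduces to the vanilla KD cross-entropy against the teacher distribution, which is one way a single validation-tuned $\gamma$ could interpolate between plain distillation and label-dominated training.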
Rebuttal 1: Rebuttal: # Response to all reviewers We thank all the reviewers for their elaborate and constructive feedback. Their valuable suggestions help improve the quality of our paper greatly. ### **Response to Reviewer R7Hg part (2/2)** **Response to the question 1:** In our experiments, we primarily utilize the training script from the timm library to train the student models, while also implementing the KD loss of the baselines following the DKD approach. However, due to the distinct nature of features learned by heterogeneous architectures, certain differences arise. For example, the feature shape of a CNN is of size (N, C, H, W), while that of a ViT is denoted as (N, L, D), where N indicates the batch size, C, H, and W refer to the channel, height, and width of the CNN model's feature map respectively, and L and D denote the patch number and embedding dimension of the ViT/MLP model's feature map. To apply previous feature distillation methods designed for CNN models, we need to transform the feature map of the ViT/MLP model into the CNN-style (shape) feature through an "unpatchify" operation. However, we speculate that this operation might be overly simplistic and overlooks intrinsic features specific to the ViT/MLP model's learned features. Consequently, some feature distillation methods yield suboptimal results compared to the baseline. Furthermore, we believe that more effective approaches can be developed to adapt existing feature distillation methods to cross-architecture KD, but we leave this question for future research. Generally speaking, the performances from logits-based algorithms are superior to those from hint-based algorithms. This observation reinforces our conclusion that for heterogeneous architectures, directly learning the intermediate features of the teacher might lead to sub-optimal results. However, there are a few instances where logits-based methods fall short of the baseline (student trained from scratch) performance. 
This discrepancy could potentially be attributed to the distinct inductive biases of CNN and ViT architectures, which drive them toward diverse destinations and result in dissimilar distributions. As discussed in Lines 173-175 of our main text, this phenomenon motivates us to propose the adaptive target information enhancement KD loss. **Response to the question 2:** In this paper, we predominantly employ top-1 accuracy as the evaluation metric to assess various methods. The previous approaches do not uniformly achieve improvements in top-1 accuracy across all tested teacher-student combinations. Conversely, our OFA-KD consistently outperforms these baselines in terms of top-1 accuracy. This consistent superiority stands as evidence of OFA-KD's effectiveness as a generic method for cross-architecture distillation. While we haven't identified another quantitative indicator at present, this area presents an intriguing avenue. Exploring more comprehensive ways to assess the performance of KD methods is a topic we view as a potential focus for future research. Pdf: /pdf/3c83c75bb9063a9f029ad74738854751eb1248e6.pdf
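For illustration, the "unpatchify" reshaping from a ViT/MLP feature map of shape (N, L, D) to a CNN-style map (N, D, H', W') can be sketched as follows; the function name, the use of NumPy, and the 14x14 ViT-S/16 grid are our own illustrative assumptions, not the exact implementation used in the experiments:

```python
import numpy as np

def unpatchify(tokens, grid_h, grid_w):
    """Rearrange ViT patch tokens (N, L, D) into a CNN-style map (N, D, H', W')."""
    n, l, d = tokens.shape
    assert l == grid_h * grid_w, "token count must match the patch grid"
    # (N, L, D) -> (N, H', W', D) -> channels-first (N, D, H', W')
    return tokens.reshape(n, grid_h, grid_w, d).transpose(0, 3, 1, 2)

# e.g. a ViT-S/16 on a 224x224 input: 14x14 patches, embedding dim 384
x = np.random.rand(2, 196, 384)
y = unpatchify(x, 14, 14)
print(y.shape)  # (2, 384, 14, 14)
```

A feature-distillation loss designed for CNN feature maps can then be applied to `y` directly, at the cost of discarding token-specific structure, which is the concern raised above.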
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
FLSL: Feature-level Self-supervised Learning
Accept (poster)
Summary: This paper proposes Feature-level Self-supervised Learning (FLSL) to handle dense prediction downstream tasks. Specifically, the authors employ the transformer for joint embedding and clustering and construct the objectives from the mean-shift and k-means perspectives. Experiments show that FLSL yields significant improvements in dense prediction tasks. Strengths: 1. The paper is well-motivated and the proposed method seems to work well on dense prediction tasks. 2. The analysis of the connection between mean-shift clustering and SA sounds reasonable. The authors analyze the relationship between ViT and clustering from a new perspective. 3. Experimental results on detection and segmentation show significant improvements and demonstrate the effectiveness of FLSL. Weaknesses: 1. The relation between ADCLR and FLSL is not clear. I see both ADCLR and FLSL use cross-attention to learn patch-level information. So, what's the main difference between ADCLR and the inter-view objective? 2. ''feature-level'' looks somewhat misleading. For me, we typically divide the SSL into ''feature-wise'' (Barlow Twins [1], ZeroCL [2], ARB [3], VICReg [4]) and ''instance-wise'' methods (SimCLR [5], Moco [6]) by the objectives on the different dimension. The proposed method is more like a cluster-level method (SwAV [7]). [1] Zbontar J, Jing L, Misra I, et al. Barlow twins: Self-supervised learning via redundancy reduction[C]//International Conference on Machine Learning. PMLR, 2021: 12310-12320. [2] Zhang S, Zhu F, Yan J, et al. Zero-cl: Instance and feature decorrelation for negative-free symmetric contrastive learning[C]//International Conference on Learning Representations. 2021. [3] Zhang S, Qiu L, Zhu F, et al. Align representations with base: A new approach to self-supervised learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 16600-16609. [4] Bardes A, Ponce J, LeCun Y. 
Vicreg: Variance-invariance-covariance regularization for self-supervised learning[J]. arXiv preprint arXiv:2105.04906, 2021. [5] Chen T, Kornblith S, Norouzi M, et al. A simple framework for contrastive learning of visual representations[C]//International conference on machine learning. PMLR, 2020: 1597-1607. [6] He K, Fan H, Wu Y, et al. Momentum contrast for unsupervised visual representation learning[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 9729-9738. [7] Caron M, Misra I, Mairal J, et al. Unsupervised learning of visual features by contrasting cluster assignments[J]. Advances in neural information processing systems, 2020, 33: 9912-9924. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No clearly visible limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: \ __Weaknesses:__ __1.__ _The relation between ADCLR and FLSL is not clear. I see both ADCLR and FLSL use cross-attention to learn patch-level information. So, what's the main difference between ADCLR and the inter-view objective?_ Thanks for raising this question. The main difference between ADCLR and the FLSL inter-view objective is that ADCLR leverages a specially designed cross-attention (CA) between the pseudo CLS tokens (constructed from local crops) and patch tokens to retain the local information throughout the transformer, while FLSL inter-view objective encourages the consistent representations of positive clusters automatically determined via CA between tokens from student and teacher views. More details of the usage of CA in FLSL and ADCLR are provided below. * The CA in ADCLR occurs between "pseudo" CLS (pCLS) tokens and patch tokens. The "pseudo" CLS tokens are constructed with the instance-level representations of several small local crops via a dedicated projector, while the CA in FLSL occurs between the output tokens of student ViT and teacher ViT. * The CA in ADCLR is uni-directional, i.e., from one pCLS token to patch tokens plus the pCLS itself, while the attention function of patch tokens does not consider pCLS tokens at all. In addition, there is no interaction among different pCLS tokens, whereas the CA in FLSL follows the common CA definition. * The CA in ADCLR occurs at every layer in ViT, while the CA in FLSL only occurs at the end of the two ViTs. We will include this discussion in the appendix. \ __2.__ _"feature-level" looks somewhat misleading. For me, we typically divide the SSL into "feature-wise" (Barlow Twins [1], ZeroCL [2], ARB [3], VICReg [4]) and "instance-wise" methods (SimCLR [5], Moco [6]) by the objectives on the different dimension. The proposed method is more like a cluster-level method (SwAV [7])._ Thanks for raising this question. 
The "level" in our paper refers to the semantic level, at which SSL operates. In this sense, all the above-mentioned methods can be categorized as instance-level since those methods learn meaningful representation for a whole image. FLSL, on the other hand, learns a semantically meaningful representation of a cluster of features, which is determined via mean-shift clustering at the patch/feature-level. Hence, we coined FLSL for our proposed method. We will clarify this further in the introduction section. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer taUM Comment: Thank the authors for the detailed response. I tend to accept this submission.
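To make the inter-view mechanism concrete, below is a minimal sketch of retrieving a positive soft-cluster representative for each student query via cross-attention over the teacher view's output tokens; the function names, temperature, and dimensions are hypothetical choices of ours, not FLSL's actual code:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_representative(student_tokens, teacher_tokens, tau=0.1):
    # Cross-attention softmax(Q K^T / tau) V with K = V = teacher tokens:
    # each student query softly selects a cluster in the teacher view and
    # returns that cluster's representative (its attention-weighted mean).
    attn = softmax(student_tokens @ teacher_tokens.T / tau)
    return attn @ teacher_tokens

rng = np.random.default_rng(0)
student = rng.normal(size=(4, 16))   # 4 output tokens from the student view
teacher = rng.normal(size=(20, 16))  # 20 output tokens from the teacher view
rep = cross_view_representative(student, teacher)
print(rep.shape)  # (4, 16)
```

Unlike ADCLR's pCLS-to-patch attention inside every layer, this retrieval happens once, between the output tokens of the two ViTs.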
Summary: This paper tackles self-supervised representation learning and primarily focuses on learning representations for dense downstream prediction tasks, such as object detection and instance segmentation. To improve upon prior work, the main idea of the paper consists of two parts: (1) the paper leverages the underlying mean shift clustering process via the attention mechanisms of ViTs. In particular, the presented method relies on self-attention and cross-attention layers to achieve intra-level feature clustering. (2) In addition, the presented method takes a k-means perspective to achieve inter-level feature clustering. The paper demonstrates the effectiveness of the approach via extensive experiments for dense prediction tasks, outperforming prior art for object detection and instance segmentation on MS-COCO. Strengths: - The paper makes an interesting observation by connecting the attention mechanism (self-attention and cross-attention layers in particular) to mean shift clustering. This results in a relatively clean implementation (see figure 2 for an overview). - The overall idea is intuitive: 1. The feature representations that belong to a certain cluster are close to the cluster representative (center) and pushed away from the representatives of other clusters. 2. The cluster representatives of the positive areas are pulled together (positives). This is also reflected in the final loss function and admits a clean implementation. The pseudo-code in the supplementary materials is very insightful. - The presented approach demonstrates strong performance on multiple downstream tasks and datasets. MS-COCO is used for object detection and instance segmentation, UAVDT is used for object detection from UAV platforms, and DAVIS is used for video instance segmentation. The proposed method consistently outperforms DINO (a strong baseline) in these cases. - The paper contains ablations about the loss function and impact of the number of clusters K.
The supplementary also includes additional details about the training setups. - Overall, the paper is well-written Weaknesses: - A few comparisons or discussions with related works are missing. For instance, DeepCluster [a, b], PCL [c], and CDL [d] show similarities with the presented approach as they also leverage clustering. A comparison with CDL would be the most interesting as it relies on instance level and group level losses. - The presented approach is slightly more complex than prior work (DINO). In particular, the presented approach requires additional attention layers. As the architecture differs from conventional methods due to the introduction of the self-attention and cross-attention layers, what is the additional computational cost? This information is currently not present in the paper. - It’s also not clear how robust the method is towards certain dataset biases (e.g., object-centric datasets or imbalanced datasets). The approach relies on a uniform prior over the clusters, which is valid for the ImageNet dataset. However, prior works have been able to pretrain on uncurated datasets (e.g., SEER [e]). It’s currently not clear if this is also possible with the proposed method. It would be valuable to include experiments for COCO pretraining. [a] Caron et al., Deep clustering for unsupervised learning of visual features, ECCV 2018. [b] Caron et al., Unsupervised pre-training of image features on non-curated data, ICCV 2019. [c] Li et al., Prototypical contrastive learning of unsupervised representations, ICLR 2021. [d] Wang et al., Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination, CVPR 2021. [e] Goyal et al., Self-supervised Pretraining of Visual Features in the Wild, 2021. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How well does pretraining work on non-curated datasets, like the COCO or OpenImages datasets? What is the additional computational cost compared to DINO? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper briefly mentions the limitations near the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: \ __Weaknesses:__ __1.__ _A few comparisons or discussions with related works are missing. For instance, DeepCluster [a, b], PCL [c], and CDL [d] show similarities with the presented approach as they also leverage clustering. A comparison with CDL would be the most interesting as it relies on instance level and group level losses._ Thanks for the suggestion. In the early version of this paper, we had a subsection reviewing the related works in deep clustering, including the above-mentioned ones [a,b,c,d]. However, due to page limit and the scope of the paper, we had to omit them from the submitted version. We believe the connection between FLSL and deep clustering is essential, and we will add the deep clustering subsection back to related work. \ __2.__ _The presented approach is slightly more complex than prior work (DINO). In particular, the presented approach requires additional attention layers. As the architecture differs from conventional methods due to the introduction of the self-attention and cross-attention layers, what is the additional computational cost? This information is currently not present in the paper._ The per-epoch training time of FLSL on ViT-S/16 is 1.19x longer than DINO and is on par with SelfPatch, which is 1.21x longer than DINO, under the same model and hardware configuration. We will include this discussion in the main paper. \ __3.__ _It’s also not clear how robust the method is towards certain dataset biases (e.g., object-centric datasets or imbalanced datasets). The approach relies on a uniform prior over the clusters, which is valid for the ImageNet dataset. However, prior works have been able to pretrain on uncurated datasets (e.g., SEER [e]). It’s currently not clear if this is also possible with the proposed method. It would be valuable to include experiments for COCO pretraining._ Thanks for the suggestion. We have the following observations regarding the robustness of FLSL towards dataset biases. 
* First, ImageNet at the feature/cluster-level can be viewed as uncurated. ImageNet is curated at the instance-level, i.e., balanced for each class. However, ImageNet is single-labelled, and we do not know what the distribution of "class" is when it is on feature/cluster-level or sub-image level, and the number of classes might be way more than 1K. For example, for an image labeled as “hen”, it may contain many objects and stuff alongside a small hen, and some of the objects and stuff in the image may be out of class vocabulary. In this sense, ImageNet can be viewed as an uncurated dataset at the feature/cluster-level (similar to COCO). As shown in the AAS visualization in the general rebuttal pdf file, the inputs are all COCO-like images from ImageNet-1K, FLSL captures non-label-related objects/stuff in the images, which results in more content-aligned AAS, while DINO mostly singles out label-related features and renders the tokens from the rest of the image correlated to each other. * Second, a uniform prior is a relatively safe choice. For imbalanced dataset, paper [1] shows that a uniform prior is a safer choice as the performance drop as a result of a uniform prior on an imbalanced dataset is much smaller than that of a non-uniform prior (-1.0 vs. -7.2). Thus, for a dataset with agnostic class distribution as in our case, we adopt a uniform prior. We will include the discussion above in the appendix. Due to restricted time and limited computational resource, we are not able to provide COCO pretrained results for the moment. We will provide these results in the appendix if time permits. [1] Assran et al., The hidden uniform cluster prior in self-supervised learning, arXiv preprint, 2022. --- Rebuttal Comment 1.1: Title: Questions after rebuttal Comment: I thank the authors for providing the rebuttal. I have 2 additional remarks: 1. 
As the approach takes ~20% longer during pretraining, I believe it makes sense to include 2 additional baselines: (1) increasing the number of layers in the backbone of DINO by ~20%; (2) increasing the training time of DINO by ~20%. 2. While I appreciate the images in the rebuttal, additional pretraining results on COCO would show that the presented approach can be applied to uncurated datasets (also see prior works mentioned in my original review). In addition, were the presented images randomly selected? --- Reply to Comment 1.1.1: Comment: __1.__ _As the approach takes ~20% longer during pretraining, I believe it makes sense to include 2 additional baselines: (1) increasing the number of layers in the backbone of DINO by ~20%; (2) increasing the training time of DINO by ~20%._ Thanks for your suggestion. For a fair comparison, existing works often compare the performance of different methods using the same architecture, e.g. ViT-S/16. Therefore, considering time efficiency, instead of modifying the model we still compare with DINO ViT-S/16 trained for 300 epochs, but we will report the performance of FLSL ViT-S/16 trained for 250 epochs (i.e., 250 * 1.2 = 300). The training and evaluation on the downstream tasks may take a couple of days. \ __2.__ _While I appreciate the images in the rebuttal, additional pretraining results on COCO would show that the presented approach can be applied to uncurated datasets (also see prior works mentioned in my original review). In addition, were the presented images randomly selected?_ As pre-training on COCO can be demanding (e.g., hyperparameter tuning might be necessary due to the difference from ImageNet, and extra time is needed to evaluate on the downstream tasks), we will choose a lighter model (ViT-S/16) and a smaller number of epochs (200) to show the capability of FLSL on an uncurated dataset. Hopefully we will get a decent result by the end of the discussion period and report it to you.
As stated in the figure caption, images are randomly sampled from ImageNet. --- Reply to Comment 1.1.2: Comment: __1.__ _As the approach takes ~20% longer during pretraining, I believe it makes sense to include 2 additional baselines: (1) increasing the number of layers in the backbone of DINO by ~20%; (2) increasing the training time of DINO by ~20%._ \ Per our previous discussions, we conducted FLSL training using 250 epochs, aligning the training duration with DINO's 300-epoch training schedule. The downstream performance on COCO and ADE20K is reported below.

Table 1. Mask R-CNN on COCO object detection & segmentation

| Method - #epoch | AP_bbox | AP_bbox50 | AP_bbox75 | AP_mk | AP_mk50 | AP_mk75 |
| :-------------: | :-----: | :-------: | :-------: | :---: | :-----: | :-----: |
| DINO - 300 | 40.8 | 63.4 | 44.2 | 37.3 | 59.9 | 39.5 |
| FLSL - 250 | 44.7 | 65.8 | 48.0 | __40.9__ | __64.9__ | 43.9 |
| FLSL - 300 | __44.9__ | __66.1__ | __48.1__ | 40.8 | 64.7 | __44.2__ |

Table 2. Semantic FPN on ADE20K semantic segmentation

| Method - #epoch | aAcc | mIoU | mAcc |
| :-------------: | :--: | :--: | :--: |
| DINO - 300 | 79.0 | 38.3 | 47.1 |
| FLSL - 250 | 81.37 | __42.94__ | 54.43 |
| FLSL - 300 | __81.47__ | 42.91 | __55.06__ |

Even with the same training duration, FLSL using 250 epochs still outperforms DINO-300. We will include this result in the appendix. \ __2.__ _While I appreciate the images in the rebuttal, additional pretraining results on COCO would show that the presented approach can be applied to uncurated datasets (also see prior works mentioned in my original review).
In addition, were the presented images randomly selected?_ \ Following the settings in [a], we conducted experiments to showcase FLSL's performance on uncurated datasets such as COCO. The results are reported in Table 3 below, alongside existing COCO-pretrained methods utilizing Mask R-CNN RN50 FPN. FLSL delivers improved performance even with a shorter training schedule (400 epochs), and achieves even better performance with 800 epochs. Please note that due to time constraints we used the same default hyperparameters as for pretraining on ImageNet-1K. It turns out the default hyperparameters are rather robust and yield decent results when pretraining on COCO. We will include this result in the appendix to demonstrate FLSL's performance on the uncurated dataset.

Table 3. Object detection and instance segmentation fine-tuned on COCO

| Method | Backbone | Data | #epoch | AP_bbox | AP_bbox50 | AP_bbox75 | AP_mk | AP_mk50 | AP_mk75 |
| :----: | :------: | :--: | :----: | :-----: | :-------: | :-------: | :---: | :-----: | :-----: |
| SimCLR | ResNet50 | COCO | 800 | 37.0 | 56.8 | 40.3 | 33.7 | 53.8 | 36.1 |
| DenseCL | ResNet50 | COCO | 800 | 39.6 | 59.3 | 43.3 | 35.7 | 56.5 | 38.4 |
| BYOL | ResNet50 | COCO | 800 | 39.5 | 59.3 | 43.2 | 35.6 | 56.5 | 38.2 |
| ORL | ResNet50 | COCO | 800 | 40.3 | 60.2 | 44.4 | 36.3 | 57.3 | 38.9 |
| FLSL | ViT-S/16 | COCO | 400 | 40.9 | 64.7 | 43.9 | 38.0 | 61.4 | 39.9 |
| FLSL | ViT-S/16 | COCO | 800 | 41.7 | 64.7 | 45.5 | 38.4 | 62.0 | 41.0 |

\ \ [a] Xie, J., Zhan, X., Liu, Z., Ong, Y.S.
and Loy, C.C., 2021. Unsupervised object-level representation learning from scene images. Advances in Neural Information Processing Systems, 34, pp.28864-28876.
Summary: Current self-supervised learning (SSL) methods, including SimCLR, DINO, VICReg, and MOCOv3, focus mainly on instance-level representations, limiting their use in tasks like object detection and segmentation. To overcome this, a new two-level feature clustering SSL method named Feature-Level Self-supervised Learning (FLSL) has been introduced, which uses the mean-shift clustering process of Vision Transformers to improve semantic cluster representations. Experimentally, FLSL outperforms existing SSL methods in dense prediction tasks, delivering impressive results on multiple benchmarks, notably the MS-COCO, UAVDT, and DAVIS 2017 datasets. Strengths: 1. It introduces mean-shift and k-means perspectives for the pre-training of dense prediction tasks. 2. The performance is improved with the proposed method. Weaknesses: 1. The idea of considering similar pixels or patches as positive pairs is not new. Many previous works have explored this idea already, e.g. [1]. 2. Many related works are not compared, e.g. [2, 3]. There are also some other works that I do not mention. [1] Hénaff O J, Koppula S, Alayrac J B, et al. Efficient visual pretraining with contrastive detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 10086-10096. [2] Xie Z, Lin Y, Zhang Z, et al. Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 16684-16693. [3] Xiao T, Reed C J, Wang X, et al. Region similarity representation learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 10539-10548. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The authors claimed that they demonstrate for the first time the connection between the attention mechanism and mean-shift clustering. However, there are already some works that demonstrate the connection between attention and clustering, e.g. [1].
Does this paper bring any new insights? [1] Zhou D, Yu Z, Xie E, et al. Understanding the robustness in vision transformers[C]//International Conference on Machine Learning. PMLR, 2022: 27378-27394. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: \ __Weaknesses:__ __1.__ _The idea of considering similar pixels or patches as positive pairs is not new. Many previous works have explored this idea already, e.g. [1]._ Thanks for bringing [1] to our attention. Yes, FLSL belongs to the family of SSL methods that consider similar pixels or patches as positive pairs. We have discussed in the related work section the representative SSL methods in this area, i.e., SoCo, ORL, PixPro, LC-loss, SelfPatch+DINO, and ADCLR, etc. The work in [1] relies on dedicated algorithms (e.g., FH or MCG) to determine the mask of similar pixels/patches, which is analogous to SoCo and ORL that leverage non-trainable _selective search_ algorithms to find the RoIs as positive pairs. In contrast, FLSL is end-to-end trainable and does not rely on any dedicated non-trainable algorithm for cluster determination. Instead, FLSL leverages mean-shift clustering in the form of self- and cross-attention to automatically find a positive pair of soft clusters. This accounts for one of our main contributions. We will incorporate [1] into the related work section, and clarify our contributions further. \ __2.__ _Many related works are not compared, e.g. [2, 3]. There are also some other works that I do not mention._ Thanks for pointing out these related works. Indeed, there are many related works on this topic, and we incorporated the most representative ones in the paper, including SoCo, ORL, PixPro, LC-loss, SelfPatch+DINO, and ADCLR, etc. Paper [2], PixPro, has been discussed in related work. Paper [3] takes a contrastive learning strategy to maximize the similarity of global representations and the sliding-window-pooled representations in the overlapped region of two augmented views. There are several existing works considering this joint consistency on views of image and local patches (e.g., DetCo, which has been discussed in our paper). We will include paper [3] in related work too.
\ __Questions:__ __1.__ _The authors claimed that they demonstrate for the 1st time the connection between the attention mechanism and mean-shift clustering. However, there are already some works that demonstrate the connection between attention and clustering, e.g. [1]. Does this paper bring any new insights?_ Thanks for pointing out this related work. Paper [1] tries to interpret the attention mechanism from the perspective of information bottleneck (IB), which is further related to clustering. From this perspective, paper [1] is relevant to ours. However, the formulation of IB in paper [1] is essentially an EM-fitting of a GMM with soft assignment under certain major assumptions (i.e., a Gaussian approximation in KL minimization and a small smoothing scale, as shown in the proof provided in their appendix). In contrast with soft GMM, mean-shift clustering is non-parametric (KDE) and makes no a priori assumptions about the underlying clusters (e.g., unknown number of clusters, no KL minimization involved). This makes mean-shift clustering more in line with the attention mechanism, which likewise makes few assumptions about the input. We will cite paper [1] and include this discussion in related work. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns.
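The mean-shift/attention correspondence discussed in this exchange can be illustrated with a small numeric sketch (a toy example with parameters of our own choosing; for l2-normalized tokens the Gaussian-kernel weights below equal dot-product attention weights softmax(X Xᵀ / τ), since the norm terms either cancel in the softmax or are constant):

```python
import numpy as np

def mean_shift_step(x, tau=0.5):
    """One mean-shift iteration: move each point to the kernel-weighted
    mean of all points. For l2-normalized rows the weights reduce to
    dot-product self-attention softmax(x @ x.T / tau)."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    w = np.exp(-d2 / (2.0 * tau))
    w /= w.sum(axis=1, keepdims=True)  # row-normalized, attention-style weights
    return w @ x

# two tight blobs; iterating collapses each point onto its cluster's mode
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, size=(5, 2)),
                    rng.normal(5.0, 0.1, size=(5, 2))])
for _ in range(20):
    x = mean_shift_step(x, tau=0.5)
```

This reflects the non-parametric behavior contrasted with soft GMM fitting above: no number of clusters is assumed, yet the iteration recovers the two modes.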
Summary: The authors point out the limitations of previous SSL methods on dense prediction tasks, because of their instance-level objectives. On the other hand, recent studies focusing on dense prediction operate on regions, patches, and pixels to learn globally semantic representations of these sub-regions. To this end, the paper introduces a novel method that learns representations with both local and global semantics by leveraging mean-shift clustering. The proposed method consists of intra-view clustering with mean-shift and inter-view clustering with k-means. The extensive experimental results show its effectiveness on various dense prediction tasks. Strengths: * Well-written paper with superior performances * Conducted experiments on various dense prediction tasks Weaknesses: * Some SSL methods that are strong in dense prediction tasks are omitted from the tables. (e.g., iBOT and RC-MAE) * For example, iBOT with ViT-S/16 outperforms the methods in Table 1 in terms of AP^bbox and AP^mask. RC-MAE is also comparable to iBOT. * The algorithm table in the appendix seems to have a gap with the equations of the proposed method. Is there some omitted explanation in the algorithm table? * It would be better to compare the AAS of the method with other state-of-the-art methods so that the qualitative results can visualize distinctive behaviors. * Minor * It would be better for the highest values in the tables to be bold. [1] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. "ibot: Image bert pre-training with online tokenizer," ICLR'22 [2] Youngwan Lee, Jeffrey Willette, Jonghee Kim, Juho Lee, Sung Ju Hwang, "Exploring The Role of Mean Teachers in Self-supervised Masked Auto-Encoders," ICLR'23. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Is the proposed method also competitive in terms of throughput? * The scales of random resized crop for each network seem to have a high lower bound compared to previous SSL methods.
Is there any reason that the lower bound of scales should be relatively high? * Does the FLSL also show superiority in the fine-tuning task on ImageNet-1K? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The potential limitations are addressed well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: \ __Weaknesses:__ __1.__ _Some SSL methods that are strong in dense prediction tasks are omitted from the tables. (e.g., iBOT and RC-MAE). For example, iBOT with ViT-S/16 outperform the methods in Table 1 in terms of $AP^{bbox}$ and $AP^{mask}$. RC-MAE is also comparable to iBOT._ Thanks for pointing out these related works. We discussed iBOT in the related work section. We choose ViT-S/* as backbone of FLSL and Mask R-CNN as detector because (1) this benchmark has a lower computational cost, and (2) there are more baselines to compare with. We did not include iBOT results in Table 1 because iBOT employs the **Cascade** Mask R-CNN as detector, which is more complex and expensive than Mask R-CNN, leading to an unfair comparison. RC-MAE only provides the results of ViT-B/16 with Mask R-CNN, and we will include its results in Table 2. \ __2.__ _The algorithm table in the appendix seems to have a gap with the equations of the proposed method. Is there some omitted explanation in the algorithm table?_ A constant $\log K$ (as a result of a uniform prior $\pi=1/K$) is omitted in the algorithm table. Specifically, with a uniform prior $\pi=1/K$, the KL divergence term in the objective function (13) reduces to the entropy of the student prediction plus a constant $\log K$, the latter of which is omitted in the algorithm table. \ __3.__ _It would be better to compare the AAS of the method with other state-of-the-art methods so that the qualitative results can visualize distinctive behaviors._ This is an excellent suggestion. We provided a side-by-side comparison of AAS visualization between FLSL and DINO in the general rebuttal pdf file. As we can see, FLSL leads to AAS better aligned with underlying objects and stuff, and captures more objects alongside the label-related object in an image, while DINO tends to single out the label-related tokens and drives the tokens in the rest of an image to be highly correlated. 
We will add this visualization and discussion to the appendix. \ __4.__ _It would be better for the highest values in the tables to be bold._ We highlighted the FLSL results in light blue in the tables. As suggested, we will highlight the highest values with boldface for better visibility. \ __Questions:__ __1.__ _Is the proposed method also competitive in terms of throughput?_ The per-epoch training time of FLSL on ViT-S/16 is 1.19x longer than DINO and is on par with SelfPatch, which is 1.21x longer than DINO, under the same model and hardware configuration. We will include this discussion in the main paper. \ __2.__ _The scales of random resized crop for each network seem to have high lower bound compared to previous SSL methods. Is there any reason that the lower bound of scales should be relatively high?_ We set a higher lower-bound to include more contextual information in each view, helping the model learn representations at a higher semantic level. Specifically, in FLSL, a positive cluster representation is retrieved via mean-shift cluster assignment in the form of cross-attention. Consider a query token from a source local crop that is very small, containing only a shade of a single color and no structured features: the cross-view cluster assignment of that query would result in a cluster of all patches of similar color in the target view. This restricts the semantic level to colors and hinders the model from learning meaningful representations at a higher semantic level that better aligns with objects or stuff. Therefore, a higher lower-bound can provide more contextual information to help FLSL learn meaningful representations. We will clarify this further in Appendix 4. \ __3.__ _Does the FLSL also show superiority in the fine-tuning task on ImageNet-1K?_ We did not consider fine-tuning on the classification task because FLSL is designed mainly for dense prediction tasks and there is no [class] token involved.
As future work, we will explore ways to extend FLSL to tasks that necessitate a global representation while retaining its existing properties.
Rebuttal 1: Rebuttal: Please find the attached pdf file that includes (Figure 1) AAS visual comparison between FLSL and DINO, and (Table 1) ADE20K semantic segmentation results. Pdf: /pdf/2eb6ba617723dd126a7f71b6e43c8bed8bbf134f.pdf
Dataset source: NeurIPS_2023_submissions_huggingface (conference year 2023)
Summary: This paper presents FLSL, a self-supervised learning model designed to give good performance of the pre-trained model on downstream dense prediction tasks. The proposed method can be summarised as a two-level clustering problem to achieve this objective: - Intra-view clustering: which aims to cluster together semantically related embeddings and pull apart those which are not. This is achieved using cluster representatives, which are obtained using mean-shift clustering. The embeddings are then drawn to their representatives. The paper demonstrated the relation between mean-shift and self-attention, which is used for that purpose - Inter-view clustering: the goal is to facilitate the representations of semantically similar features across the dataset being close. This is done by applying a soft-assignment of the representatives obtained previously to a set of centroids. Strengths: 1. The demonstration of the relation between mean-shift and self-attention is a strong contribution of this paper and it plays a key role in the intra-view clustering objective 2. The proposed method outperforms other SSL frameworks on standard dense benchmarks (MS-COCO, UAVDT, DAVIS) 3. The paper is well written and the authors provided proofs for their claims as well as code and implementation details for reproducibility. Weaknesses: 1. This paper uses a bbox-aligned k-NN classifier to perform the ablation studies of their work, but we don't know how the performance on this classifier correlates with the results on downstream segmentation or detection tasks. 2. It would have been good to have further evaluations on semantic segmentation on other datasets such as ADE20K, PascalVOC, Cityscapes to get a broader view of the performance of the model Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the computational cost of the proposed method compared to other SSL frameworks? 2. Did you evaluate the importance of the different levels of clustering? i.e., how much each contributes to the actual benefits on downstream performance. I see some results in Table 5, but it shows the performance with k-NN and always considers the inter-view clustering. What happens when we do not consider the inter-view clustering? 3. Also noticed that using smaller patches on the backbone in FLSL (e.g., ViT-S/8) seems to be better compared to backbones with larger patches (16); do you know the reasons behind this behaviour? (in Tables 1 and 3) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: no particular limitations which need to be addressed. The main limitations of this work can be the complexity of the method, which the authors spoke about, and the potential computational cost of the framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: \ __Weaknesses:__ __1.__ _How the performance on bbox-aligned k-NN classifier correlates with dense prediction tasks?_ Thanks for raising this question. FLSL learns dense semantic representations rather than a single instance-level representation. Therefore, we design a bbox-aligned k-NN classifier to evaluate the feature quality for hyperparameter tuning and ablation study. As detailed in Appendix 5, to make this classifier correlate with the results on downstream tasks of detection or segmentation, the bounding box information provided by ImageNet-1K is leveraged during validation. Specifically, to have a higher bbox-aligned k-NN accuracy, the FLSL-learned representations in the region of ground-truth bounding boxes should be both locally and globally semantic, such that the extracted local features are indeed from the same object in an image while being close to the bbox-aligned features (of the same category) extracted by FLSL in training images for k-NN classification. Our empirical study also shows the effectiveness of this bbox-aligned k-NN classifier for downstream tasks of detection and segmentation. We will clarify this further in Appendix 5. \ __2.__ _It would have been good to have further evaluations on semantic segmentation on other datasets such as ADE20k, PascalVOC, cityscapes to get a broader view of the performances of the model._ Thanks for the suggestion. Please see the general rebuttal pdf file, where we provided ADE20k semantic segmentation results of FLSL compared with baseline methods. In line with SelfPatch, all models are fine-tuned with Semantic FPN under the standard 40k iteration schedule. Similar to the results on COCO object segmentation and DAVIS instance segmentation, FLSL consistently outperforms all the baseline methods on ADE20k. We will include this result in the appendix.
\ __Questions:__ __1.__ _What is the computational cost of the proposed method compared to other SSL frameworks?_ The per-epoch training time of FLSL on ViT-S/16 is 1.19x longer than DINO and is on par with SelfPatch, which is 1.21x longer than DINO, under the same model and hardware configuration. We will include this discussion in the main paper. \ __2.__ _Did you evaluate the importance of the different levels of clustering? like how much each participate to the actual benefits on downstream performances. I see some results in Table 5., but it shows the performance with k-NN and always considers the inter-view clustering. What happens when we do not consider the inter-view clustering?_ Yes, Table 5 provides the ablation study of the importance of the different levels of clustering. We always consider inter-view clustering (i.e., $\eta=1$) because it governs the global meaningfulness of the representations. Without considering the inter-view clustering, the second half of the FLSL pipeline in Figure 2 is discarded, and the intra-view clustering alone leads to collapse in training and cannot learn semantically meaningful representations. We will clarify this in the caption of Table 5. \ __3.__ _Also noticed that using smaller patches on the backbone in FLSL (e.g. Vit-s/8) seems to be better compared to backbone with larger patches (16), do you know the reasons behind this behaviour? (in Table 1 and 3)_ This is because smaller patches result in higher-resolution feature maps, which benefits not only dense prediction tasks but also classification (see e.g., [1]). [1] Caron et al., Emerging properties in self-supervised vision transformers, ICCV 2021.
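The mean-shift/self-attention relation that the review highlights as FLSL's key ingredient can be illustrated with a minimal numerical sketch. This is not the authors' implementation; the Gaussian-kernel form, embedding dimensions, and temperature `tau` are all illustrative assumptions. It shows that one Gaussian-kernel mean-shift step over a set of token embeddings is exactly softmax attention with Q = K = V = X under a squared-distance similarity:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))    # 6 token embeddings of dimension 4 (toy sizes)
tau = 0.5                      # kernel bandwidth / attention temperature

# One Gaussian-kernel mean-shift step: each point moves to the
# kernel-weighted mean of all points.
sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
w = np.exp(-sq_dist / (2 * tau))
mean_shift_update = (w / w.sum(axis=1, keepdims=True)) @ X

# Self-attention with Q = K = V = X and similarity -||q - k||^2 / (2 tau):
attn_update = softmax(-sq_dist / (2 * tau)) @ X

assert np.allclose(mean_shift_update, attn_update)  # identical updates
```

With unit-norm embeddings, $-\|q-k\|^2/2 = q\cdot k - 1$, so the squared-distance similarity differs from the usual dot-product attention score only by a constant that softmax cancels.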
Self-Supervised Learning of Representations for Space Generates Multi-Modular Grid Cells
Accept (poster)
Summary: Shows how grid modules of grid cells can emerge as solutions to a self-supervised learning framework, implemented as a recurrent neural network. They take insights from Continuous Attractor models (velocity dependent weights), Dorrell et al (2023) representation theory (path invariance), and ideas about efficient coding (Sreenivasan & Fiete), and put them all within a single SSL framework, based on three loss functions for RNNs - maximising separation of distinct locations, path invariance and high capacity for encoding locations. The simulation results show modules of grid cells are formed. Strengths: The single theoretical framework for explaining various aspects of the grid code is a strength. Weaknesses: The advance on previous work is not so clear, given that each aspect has been presented previously, with representation theory covered by Dorrell et al., nice analysis of the emergence of grid codes in RNNs in Sorscher et al., and coding efficiency in Sreenivasan and Fiete, and using the baseline continuous attractor model of RNNs with velocity dependent weights for path integration. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Given the eventual aim to understand what is necessary for the emergence of grids in RNNs, should Sorscher et al have been cited earlier? Or, given the aim to understand the effect of environmental manipulations such as rewards, should Nayebi et al (NeurIPS 2021) be cited? For the eventual aim of pushing this framework towards machine learning in general domains such as vision, audition etc, what will correspond to velocity, given the reliance on a framework based on path integration? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 1Nr3 for their time and feedback and suggestions. Replying to each comment, suggestion and/or question in turn: > The advance on previous work is not so clear, given that each aspect has been presented previously, with representation theory covered by Dorrell et al., nice analysis of the emergence of grid codes in RNNs in Sorscher et al., and coding efficiency in Sreenivasan and Fiete, and using the baseline continuous attractor model of RNNs with velocity dependent weights for path integration. We thank the reviewer for the opportunity to better explain why our work is a significant and novel contribution. **_We devoted our entire Global Response to answering this question_**. We refer the reviewer to this response. In short, there are four approaches (that we identify and that you correctly reiterate) and each has at least 1 significant limitation. We use insights from all four perspectives and *solve all their limitations* by contributing a unified model that elegantly combines the strengths of these approaches. Please let us know if you have any follow up questions! Additionally, the difference between supervised and self-supervised learning is _significant_: In the supervised setting, for each time point, the loss function has access to the absolute position of the agent in the environment. In the self-supervised case, for each pair of positions, the loss function has 1 coarse-grained bit of information: whether the current position is within $\sigma_x$ of the previously considered position. Consequently, showing a successful solution in self-supervised learning is a significant advance beyond a supervised learning solution. > Given the eventual aim to understand what is necessary for the emergence of grids in RNNs, should Sorscher et al have been cited earlier? We’re not quite sure we understand this suggestion. We cite Sorscher et al.
2019 on Page 1 in the second paragraph, after introducing grid cells in the first paragraph, and again on Page 2. Could you please clarify? > Or the aim to understand the effect of environmental manipulations such as rewards, should Nayebi et al (NeurIPS 2021) be cited? Nayebi et al. 2021’s primary contribution is regressing candidate artificial neural networks’ activations against mouse electrophysiology recordings from medial entorhinal cortex. As you correctly state, their Section 5 also studies how trajectory statistics (e.g., a preference of moving to a particular location, understood to be a “reward” location) affect the artificial neural population similarly to reward-driven remapping in biological neural populations. Our paper does not regress artificial and biological representations, nor does our paper study how those regressions are affected by trajectory statistics, and so respectfully, we feel Nayebi et al. 2021 is a bit distant to merit a citation. > For the eventual aim of pushing this framework towards machine learning in general domains such as vision, audition etc, what will correspond to velocity, given the reliance on a framework based on path integration? That’s a great question and one that we see as wide-open! In vision, one might imagine an agent following some “path” in view-space i.e. moving in a way such that the agent returns to the agent’s original vantage point. This can be related to visual saccades produced by primate vision. In audition, one might take inspiration from Aronov, Nevers & Tank Nature 2017, although the analogy is admittedly less clear. This is why we suggest these other modalities as possible future directions. --- Rebuttal Comment 1.1: Title: Author Response Comment: Dear Reviewer, Since the author-reviewer discussion period is coming to an end this week, we request the reviewer to consider increasing their score if we have addressed the scientific concerns raised in their review. 
Specifically, we have devoted our entire global response to better motivate our work and compare it to the previous approaches to grid cell emergence that the reviewer has correctly pointed out. In particular, we clearly state the limitations of each previous approach and show how we have combined insights from all these approaches to overcome every limitation.
Summary: The paper proposes a computational framework for the emergence of grid cells in the mammalian cortex through self-supervised learning. The learning objective is formulated by combining requirements of path independence of the location code, error-correcting coding, and efficient coding. Validity of the approach is demonstrated through numerical experiments on a recurrent neural network. Resulting cells reproduce important properties of grid cells observed in biology: cells organized into modules with common spatial frequency and orientation; modules exist for a range of frequencies; cells in a single module regularly tile the space. Strengths: The paper formulates a principled and biologically meaningful optimization problem and arrives at a representation that manifests important properties of grid cells. The role of each component of the objective function is studied experimentally. Rich future directions outlined in the Discussion section. Weaknesses: Authors do not discuss the relation of their work to the literature arguing for grid cells' role in predictive representation (e.g., Stachenfeld et al, 2017; Momannejad 2020). The biological plausibility of the proposed learning procedure is also not discussed. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Do all learning batches start from the same implied position x? Otherwise it's hard to imagine coordination of position codes between batches. Reader needs at least some elaboration on updates of W. What's a ratemap? Is it a receptive field? I could not decipher figure 5a, left. line 199-200: do you mean the arena is larger in testing than in training? how can the value for the low spatial frequency be found in testing? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Author Response We thank Reviewer crXg for their time, positive feedback and high score. Replying to each comment, suggestion and/or question in turn: > Authors do not discuss relation of their work to the literature arguing for grid cells role in predictive representation (e.g., Stachenfeld et al, 2017; Momannejad 2020). Stachenfeld et al. 2017 and Momannejad 2020 are based on the successor representation (from reinforcement learning) for learning predictive representations, resulting in a place cell-like population code (the successor map), and constitute a normative model for place cell formation. Given a place cell representation, they show that eigendecomposition of their place cell representation results in vectors that are periodic. This point is essentially the same as Dordek et al. 2016 (non-negative PCA of place cells results in grid cells), which was subsequently used by Sorscher et al. 2019 to design better supervised targets for recurrent neural networks. We will add a connection to this line of work with appropriate citations to our Discussion. Thank you for helping us improve the paper! > The biological plausibility of proposed learning procedure is also not discussed. To clarify, are you inquiring about the biological plausibility of the data (i.e. the trajectories and their permutations), of the network (i.e. the architecture), of the learning algorithm (i.e. backpropagation), or of something else? > Reader needs at least some elaboration on updates of W. Could you please clarify what additional information you’re looking for? W is a square matrix output by an MLP whose input is the 2D Cartesian velocity. We’d be happy to provide whatever additional information once we understand. Could the reviewer provide us with more specific questions or doubts about W? > What's a ratemap? Is it a reception field? Indeed, a ratemap is a receptive field over spatial position.
Specifically, one ratemap shows one neuron’s (average) activation values at different spatial locations. A ratemap is computed by partitioning physical space into square bins (typically ~5 cm by 5 cm) and then computing the neuron’s average activation, averaged over all instances that the animal/network passes through the spatial bin. Ratemaps are a standard tool in the neuroscience of spatial navigation, used to visualize the spatial tuning of neurons. We will add a small section in the appendix describing the construction of ratemaps. > Do all learning batches start from the same implied position x? Otherwise it's hard to imagine coordination of position codes between batches. Yes, all batches start from the same implied position. We have added this clarification in the main text. > line 199-200: do you mean arena is larger in testing than in training? how can the value for the low spatial frequency be found in testing? Yes, the testing trajectories are much longer than the training trajectories. --- Rebuttal Comment 1.1: Comment: Thanks for detailed feedback. I have no more issues regarding biological plausibility or W updates. Ratemap: a brief definition of the term in the main text would help the uninitiated reader. On testing: My question is about the size of the arena, which is not the same thing as the trajectory length, right? Larger arena would have areas unexplored during training, and that is my confusion. --- Reply to Comment 1.1.1: Title: Author response to reviewer cRXg Comment: Thank you for your specific question. Generalization to arenas larger than the training arena is a key advance of our work (relative to supervised learning approaches) and is in line with previous theoretical work on grid cell coding. The sizes of the arenas in Fig. 5 are larger than the trajectory length, as you have correctly pointed out - this is generalization by extrapolation. 
Grid modules provide unique positional representations up to a scale that is exponential in the number of modules - see Fiete et al (2008) [1] for theoretical arguments supporting this (also, see below for our intuitive explanation of this paper). Once a multimodular representation has been learnt on the training trajectories, the network can generalize to much larger arenas. The capacity loss provides the key top-down inductive bias for this generalization capability (or else the network overfits to the training trajectories, learning a single grid scale which is roughly the size of the training trajectories, as we have shown in our ablation experiments in Fig 7a). This is also why we refer to the capacity loss as "capacity": it allows the network to represent many more locations uniquely. In light of these arguments and explanations, if there are no further scientific questions, we would request the reviewer to please consider revising their score. We will add a paragraph better contextualizing these results to our paper along with the relevant citation. [1] What Grid Cells Convey about Rat Location. Ila Fiete, Yoram Burak and Ted Brookings, Journal of Neuroscience 2 July 2008, 28 (27) 6858-6871 To intuitively explain this result, having multiple modules is akin to decimal or binary notation: with each additional digit or bit, the number of representable numbers grows exponentially. Grid cells are a little different for two reasons: (1) the scales of each grid module can be of the same order of magnitude, and (2) each digit/bit can be updated in parallel by path integration rather than sequentially (i.e. there’s no need to carry over when updating). Another intuitive perspective is a classic number theory argument: the Chinese remainder theorem (https://en.wikipedia.org/wiki/Chinese_remainder_theorem): For example, to uniquely represent numbers up to 23, one can follow two schemes: Scheme 1: Have one box for each number, totalling 23 boxes.
Scheme 2: Represent the number with remainders after dividing by 3, 5 and 7: this totals 3+5+7=15 boxes. For larger numbers, the ratio of boxes between Scheme 1 and Scheme 2 increases exponentially. For the given example of 3, 5 and 7, one can actually represent numbers up to 3$\times$5$\times$7 = 105 uniquely. You can think of each remainder as a single grid module.
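The counting argument above can be checked directly. A minimal sketch (the moduli 3, 5, 7 come from the rebuttal's own example):

```python
# Each n in [0, 105) maps to a unique triple of remainders modulo (3, 5, 7),
# since 3, 5 and 7 are pairwise coprime and 3 * 5 * 7 = 105.
moduli = (3, 5, 7)
codes = {tuple(n % m for m in moduli) for n in range(105)}
assert len(codes) == 105      # every remainder triple is distinct

print(sum(moduli), "boxes uniquely represent", 3 * 5 * 7, "numbers")
# prints: 15 boxes uniquely represent 105 numbers
```

Each modulus plays the role of one grid module's period; adding a module multiplies the representable range while only adding (not multiplying) the number of "boxes".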
Summary: This work reviews some of the issues with existing models of grid cells (cells in mammalian brains that fire when the animal is located at the vertices of a hexagonal grid) and suggests a new model based on recurrent neural networks (RNNs). The model is self-supervised, eliminating the worry that the structure of the readout in a supervised task could influence the outcome. Besides leading generically to grid-like firing, the model also exhibits multiple grid scales organized in modules such that the scale is the same within each module but the phase varies. Strengths: *Originality:* The paper suggests a way for grid cells to emerge from a self-supervised learning (SSL) paradigm, in contrast to previous work which works mostly in a supervised regime. *Quality:* The paper does a good job of surveying prior work and includes a fair amount of simulations. *Clarity:* The presentation is generally clear. *Significance:* Grid cells are of tremendous interest in neuroscience and there is a considerable volume of work attempting to explain their properties and the reasons behind their existence. This paper provides a novel model for how grid cells may emerge – as a self-supervised means of keeping track of an animal's location in space – and is thus of great interest to neuroscientists. Weaknesses: 1. The authors present this work as a significant advance over methods based on supervised learning because the latter depend on specific design choices. However, the same seems to be true in the new approach: for instance, while the separation and path-invariance losses are pretty natural, the capacity loss is counter-intuitive, as mentioned even by the authors in the Discussion. Excluding the capacity loss eliminates the multi-scale nature of the solution. Moreover, the emergence of grid cells is sensitive to the parameters used in the loss, as shown in Figure 7.
It is thus not immediately obvious that the proposed method requires any less fine tuning to lead to grid cells than prior models. 2. Ideally the code used to run all the simulations would have been included with the supplementary material. It was promised for after acceptance, but this seems hard to justify, since the code exists already, and it could be useful for a thorough evaluation of the paper. 3. Most of the figure panels should be significantly larger – Figure 7 is a particularly bad example. I understand that space is a limiting factor, but some progress can be made by including fewer ratemaps (don't see the need for more than 3x3 or 4x2 examples of each kind). Also ensure that font sizes don't go below 7 or 8 – when they do, it may be better to just remove the text because it is very inconvenient to read. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The permutation invariance discussed around eq. (2) assumes Euclidean space, where the order of the steps does not change the outcome of a path. This seems overly restrictive since a typical animal's habitat is unlikely to feature a flat 2d plane, but instead is more likely to contain obstacles, hills, tunnels, etc. Can the authors comment on how their method might adapt to such cases? 2. The motivation for the capacity loss in eq. (9) seems a bit obscure. This was touched upon briefly in the Discussion, but it would be useful for the authors to indicate how they came up with this particular formulation. Also, were other capacity loss functions attempted, and how did they work? Minor comments: * line 127: $f$ is used here with a different meaning from $f$ in, e.g., eqns. (2) or (3), which is a bit confusing * Figure 6, panel c: what non-linear dimensionality reduction technique was used? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have adequately discussed limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Author Response We thank Reviewer dHUQ for their time and feedback. Replying to each comment, suggestion and/or question in turn: > The authors present this work as a significant advance over methods based on supervised learning because the latter depend on specific design choices. However, the same seems to be true in the new approach [...] It is thus not immediately obvious that the proposed method requires any less fine tuning to lead to grid cells than prior models. Thank you for giving us a chance to clarify! We want to make three points. 1. Firstly, the prior supervised path-integrating network papers are insightful and valuable. Our paper would not be possible without their contributions. 2. Secondly, we want to reiterate that our paper is a significant contribution not just to supervised path-integrating networks but to three other lineages of understanding grid cells: basis function optimization, coding theory and continuous attractor models . As we explain in our Global Response, all four lines of work have at least one significant limitation, and our paper solves all four lineages’ limitations simultaneously. This is why we feel our paper is a novel & significant contribution. 3. Thirdly, the limitations of supervised path-integrating networks that we identify are more nuanced than they “depend on specific design choices.” To explain generally the shortcomings we see with supervised path-integrating networks, there are several. See Approach 2 of the Global Response for an extensive discussion. > the capacity loss is counter-intuitive, as mentioned even by the authors in the Discussion. > The motivation for the capacity loss in eq. (9) seems a bit obscure. This was touched upon briefly in the Discussion, but it would be useful for the authors to indicate how they came up with this particular formulation. We thank the reviewer for the opportunity to better explain the capacity loss. 
The capacity loss is one of our conceptual breakthroughs, and a key component of our paper. Previous theory work [Fiete, Burak, Brookings 2008; Sreenivasan & Fiete 2011] identified a high coding capacity as one of the key properties the grid code provides. The capacity loss demands from the network: “Use as little of your total available coding volume as possible on the training data (subject to separation and path invariance).” This is linked to generalization. On the training data, the network learns how to dynamically evolve its representations in a manner consistent with velocity inputs. The network could learn a ‘shortcut solution’ (single scale solution) and use up all its coding volume on the training data, but then it will fail to generalize to longer trajectories (Fig. 3eg). The capacity loss provides a top-down inductive bias to prevent this ‘shortcut solution’. > Also, were other capacity loss functions attempted, and how did they work? No, no other capacity loss functions were attempted. We can explain how this capacity loss came to be. We found that the first two loss terms reliably produced one grid module (i.e. all units sharing one period, as shown in our ablation figure) but that the period scaled to match the training trajectory length. The problem is that there’s no incentive to ensure that a location outside the training distribution has a different representation than a location inside the training distribution, but that’s precisely what we needed. How could we incentivize generalization? Fig 3efgh was the conceptual breakthrough: Fig3e and Fig3f both achieve the same _training_ loss, but only Fig3f generalizes. Thus, we needed a loss term to prefer Fig3f over Fig3e, and once we conceptualized Fig 3e and 3f the first loss function we thought of was the paper’s current capacity loss. > 2. Ideally the code used to run all the simulations would have been included with the supplementary material. 
> It was promised for after acceptance, but this seems hard to justify, since the code exists already, and it could be useful for a thorough evaluation of the paper. While NeurIPS does encourage submitting code, doing so is not a requirement and it seems unfair to penalize us for failing to do something that is not required. Moreover, submitting code is not without risk: unintentionally failing to completely anonymize the code can result in an immediate rejection. > 3. Most of the figure panels should be significantly larger – Figure 7 is a particularly bad example. Agreed. We will improve the figures as suggested. > 1. The permutation invariance discussed around eq. (2) assumes Euclidean space [...] This seems overly restrictive since a typical animal's habitat is unlikely to feature a flat 2d plane [...] Can the authors comment on how their method might adapt to such cases? We thank you for raising this question! To the best of our knowledge, previous modeling work typically assumes flat 2D Euclidean space. It is certainly an interesting point of discussion. Grid cells have been observed in bats navigating in 3D (Yartsev & Ulanovsky 2011 and Ginosar et al. 2021), and at least one paper has studied grid cells in higher-dimensional spaces (Klukas, Lewis, Fiete 2020). However, in this work, we follow the majority of previous modeling work that focuses on flat 2D Euclidean space. We leave 3D for future work. We have added this to our discussion section. > Figure 6, panel c: what non-linear dimensionality reduction technique was used? We qualitatively followed the methodology used by seminal experimental papers examining the topology of neural representations, e.g., Chaudhuri et al. Nature Neuroscience 2019 and Gardner et al. Nature 2022: we used principal component analysis to 6 dimensions followed by a non-linear dimensionality reduction to 3 dimensions (in our case, spectral embedding). Similar results are obtained if one uses Isomap. 
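The two-stage pipeline described in this answer (linear PCA down to 6 dimensions, then a non-linear embedding down to 3) is straightforward to sketch with scikit-learn. The data below is a random stand-in for recorded population activity, and the variable names are ours, not the paper's:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
# Stand-in for population activity: 500 time points x 64 units.
activity = rng.standard_normal((500, 64))

# Step 1: linear reduction to 6 dimensions with PCA.
pca6 = PCA(n_components=6).fit_transform(activity)

# Step 2: non-linear reduction to 3 dimensions with spectral embedding.
embed3 = SpectralEmbedding(n_components=3, random_state=0).fit_transform(pca6)

print(embed3.shape)  # (500, 3)
```

Swapping `SpectralEmbedding` for `Isomap` (also in `sklearn.manifold`) gives the alternative mentioned above.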
We will add an explanation and citations to these works in the appropriate section. --- Rebuttal Comment 1.1: Comment: Thanks for the explanation! Regarding sharing the code, a couple of points: first, I don't think my score would have been different had you included it. That does not mean the lack of the code is not a weakness. Second, had NeurIPS *required* the code to be submitted, the paper wouldn't have just been penalized for not including it, but it would have been rejected. That of course is not the case. It seems reasonable to me to be penalized for failing to do something you "are strongly encouraged" to do (to quote from the submission guidelines)—indeed, it would seem like a rather empty suggestion if it had no incentive to push authors to follow it. Finally, I sincerely doubt any work would be rejected for unintentionally sharing your identities in shared code. Most of the submissions I've reviewed have had code included and everything went fine. --- Reply to Comment 1.1.1: Title: Author response to reviewer dHUQ Comment: Thank you. For future submissions, we will ensure we have code with the submission. Here, we would like to take the opportunity to reiterate the difference between a supervised and self-supervised approach: In the supervised learning approaches (e.g. Banino et al, 2018; Sorscher et al, 2019), for each time point, the loss function has access to the absolute position of the agent in the environment. It’s unrealistic to assume that biological agents have access to their absolute spatial position at all times. In the self-supervised case, for each pair of positions, the loss function has 1 coarse-grained bit of information: Whether the current position is within $\sigma_x$ of the previously considered position. 
This is the key difference between the 2 approaches and the fact that our self-supervised networks learn multiple modules of grids despite having access to only this impoverished version of spatial information is the significant, non-trivial advance. We have varied hyperparameters of the loss (specifically the coding scale $\sigma_g$) to show the different representations that can emerge - unlike supervised approaches that claim generality far beyond their empirical results. If there are no further scientific questions, please consider revising your score.
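The coarse-grained learning signal described in this reply, one bit per pair of positions indicating whether they lie within $\sigma_x$ of each other, can be sketched as follows. The trajectory, the `sigma_x` value, and the function name are our own illustrative choices:

```python
import numpy as np

def pairwise_target(positions: np.ndarray, sigma_x: float) -> np.ndarray:
    """Binary tensor: entry (i, j) is 1 iff positions i and j are within sigma_x."""
    diffs = positions[:, None, :] - positions[None, :, :]   # (T, T, 2)
    dists = np.linalg.norm(diffs, axis=-1)                  # (T, T)
    return (dists < sigma_x).astype(np.float32)

# Three positions: the first two are close, the third is far away.
positions = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
target = pairwise_target(positions, sigma_x=0.5)
print(target)
# [[1. 1. 0.]
#  [1. 1. 0.]
#  [0. 0. 1.]]
```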
Summary: The paper shows that recurrent networks trained with a "self-supervised" loss learn internal representations whose units organize as grid cells. In particular, the paper defines a loss that promotes separation between neural representations encoding different spatial locations, encourages a representation to be invariant to the different possible paths leading to it, and maximizes the capacity of the representation; the authors use paired velocities and neural representations as their dataset. Strengths: - The paper was very clearly written and organized, with nice visuals that supported the text. Moreover, the paper provided a helpful background of previous research that was relevant to the formulation used in the paper. - The paper defines a loss that is nicely linked to existing theories for the emergence of grid cells, and shows that this loss, optimized using gradient descent, leads to the emergence of grid cells in artificial recurrent networks. - The authors performed ablations of the hyperparameters in their loss to show the dependence of their results on the different terms. Weaknesses: - How robust are the results to other hyperparameters, like batch size, learning rate, etc.? - While the authors criticize previous work that identified the emergence of grid cells using supervised RNNs for depending on specifics of the target function (line 55), the authors do not seem to properly explain how their setup differs, and why it does not lead to similar implicit assumptions. For example, what is the difference between the velocities being used as a supervised signal (which the authors criticize), versus incorporating them implicitly into the dataset and self-supervised loss? 
Are there similar assumptions with respect to the creation of the dataset (e.g., having a sufficient number of examples with overlapping positions)? - Further, the authors assert that "SSL mitigates the need for large scale supervised data", but it is unclear to me how different it is to incorporate the velocities as a paired dataset rather than as a target variable for a supervised objective. - (Minor) The claim in the discussion "how might ... SSL principles be applied to drive computational neuroscience forward" seems too general. - (Minor) Difficult to read text in Figure 7 - (Minor) extra italics on t in line 138 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Do all the internal units have grid cell properties, or only a fraction? - Related, rather than showing example units that look like grid cells, are there any metrics that quantify the extent to which a cell is a grid cell? - Are there any predictions that can be made, for example, about what might happen for animals that explore 3D space? - Can the authors comment on the relationship between the toroids and the encoding of space, to make the paper self-contained (e.g., schematics from Fig. 2)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Author Response We thank Reviewer keKQ for their time, feedback and high score. Replying to each comment, suggestion and/or question in turn: > How robust are the results to other hyperparameters, like batch-size, learning rate, etc? > e.g having a sufficient number of examples with overlapping positions This is a good question. We have some initial answers but more work is necessary and already underway! For context, SSL is notoriously compute hungry (see Balestriero et al. 2023), and training one of our SSL networks requires a 16 GB GPU for ~7 days, whereas a supervised path-integrating network from previous papers takes an ~8 GB GPU for ~6 hours. We have found that batch sizes of 120, 130 and 150 all work equally well; we haven’t tried smaller batch sizes, and larger batch sizes can result in OOM errors. We tested learning rates in {0.002, 0.0002, 0.00002} x optimizers in {Adam, AdamW, NAdam, NAdamW} x learning rate schedulers in {Reduce LR on Plateau, Linear Warmup with Cosine Annealing} and found different combinations have different learning curves but reach the same result. We’re currently running additional experiments and ablations. Regarding sufficiently many overlapping positions, we have not yet tested this rigorously. We conjecture that some fraction of overlapping positions is necessary for two reasons: (1) no overlapping positions is akin to contrastive SSL without a sufficient number of positive pairs, and (2) in the past, we tried sampling trajectories without overlapping positions and random representations emerged because the network does not need to path integrate; rather, it only needs to ensure it never repeats a code word. We intend to test what fraction of overlapping points is necessary to put in the appendix but have not yet been able to prioritize that experiment. 
> While the authors criticize previous work that identified the emergence of grid cells using supervised RNN to specifics of the target function (line 55), Line 55 was unintentionally harsh. We will rewrite this. > the authors do not seem to properly explain how their setup differs, and does not lead to similar implicit assumptions. This is good feedback! We will do a significantly better job at distinguishing our approach from the previous supervised learning approach. We have devoted our Global Response (Approach 2) to this point. - To your question about distinguishing ourselves from previous approaches, please see the Global Response. - To explain generally the shortcomings we see with supervised path-integrating networks see Approach 2 of the Global Response > Further, the authors assert that "SSL mitigates the need for large scale supervised data" To contextualize that sentence, the sentence is in our Background and reads: “SSL is increasingly gaining popularity as a normative approach in neuroscience since SSL mitigates the need for large-scaled supervised data that biological agents do not possess.” We think this is a general viewpoint across multiple modalities (vision, language, audition) that many neuroscience & ML papers have made previously. We will add citations to prominent neuroAI work that uses self-supervised learning across modalities that make a similar point. In the particular modality of spatial navigation, we can comment on the feasibility of the self-supervised versus supervised setup. Recall that the goal is to learn a self-consistent representation of spatial position that is updated by velocity. It’s unrealistic to assume that biological agents have access to their absolute spatial position at all times; if they did, why would they need to learn their spatial position? Consequently, the supervised target bypasses the learning problem. In contrast, permuting trajectories is generally straightforward. 
For instance, if you walk to a grocery store twice, you might go North then West on the first trip, and you might go West then North on the second trip. We think our setup is much more realistic for biological agents. To further contrast supervised learning & SSL, we do not include velocities as a paired dataset. Rather, we use velocities to construct a binary tensor of 1s and 0s, where 1 denotes that positions are within $\sigma_x$ of each other. This is a very low-information, coarse-grained and impoverished learning target. > are there any metrics that quantify the extent to which a cell is a grid cell? There is indeed a metric! Unfortunately, the "grid score" used by previous works (such as Banino et al. 2018, Nayebi et al. 2021, Schaeffer et al. 2022) is applicable either to _hexagonal_ lattices or _square_ lattices, but our multi-periodic lattices are all sheared. This means that the square or hexagonal grid score cannot be used for our networks to identify grid-like units. We suspect that the "conformal isometry" loss function [first identified by Dehong Xu, Ruiqi Gao et al. 2022] might make our lattices more hexagonal, and we are currently exploring this direction. > Predictions for 3d space We thank you for this question! To the best of our knowledge, the majority of previous work on grid cells assumes flat 2D Euclidean space. It is certainly an interesting point of discussion. Grid cells have been observed in 3D [Yartsev 2011, Ginosar et al. 2021] and at least one paper has studied them in higher-dimensional spaces [Klukas et al. 2020]. However, we follow the majority of previous modeling work that focuses on 2D Euclidean space. We leave 3D for future work; we will add this to our Discussion. > The claim in the discussion "how might ... SSL principles be applied to drive computational neuroscience forward" seems too general. We’ll rework this sentence. We weren’t trying to say something grandiose, merely that we hope our paper might be useful to the field. 
> Difficult to read text in Figure 7 We will make the fonts in all our figures larger. --- Rebuttal Comment 1.1: Title: Author response Comment: Dear Reviewer, Since the author-reviewer discussion period is coming to an end this week, we request the reviewer to consider increasing their score if we have addressed the scientific concerns raised in their review. Specifically, we have expanded on the difference between a supervised and self-supervised approach in the Global Response. Here, we reiterate the key difference: In the supervised learning approaches (e.g. Banino et al, 2018; Sorscher et al, 2019), for each time point, the loss function has access to the absolute position of the agent in the environment. It’s unrealistic to assume that biological agents have access to their absolute spatial position at all times. In the self-supervised case, for each pair of positions, the loss function has 1 coarse-grained bit of information: Whether the current position is within $\sigma_x$ of the previously considered position. This is the key difference between the 2 approaches and the fact that our self-supervised networks learn multiple modules of grids despite having access to only this impoverished version of spatial information is the significant, non-trivial advance. --- Rebuttal Comment 1.2: Comment: I thank the authors for their reply. I remain concerned regarding the robustness of the results, which seems to arise from the lack of quantitative metric characterizing their grid cells; and instead only providing some example units from the different runs and ablations (without quantifying the fraction of cells with "grid-like" properties). I also suspect this may make characterizing the effect of the machine learning hyperparameters more difficult as well, once those runs complete. Given the lack of quantitative demonstration of the robustness of the results, it makes it difficult to gauge the significance of the findings. 
--- Reply to Comment 1.2.1: Title: We have now provided anonymized code Comment: Thank you for your reply. We have now anonymized our code and made it available at this anonymous google drive link: https://drive.google.com/drive/folders/1JNmdeTpJhktOoFJ-slC1l2AIqRSw3sAk?usp=drive_link Let us know if this code helps in your assessment of the robustness of our results. Thank you, Authors.
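For concreteness, the three loss ideas discussed across these threads (separation between different locations, invariance across paths to the same location, and a capacity penalty) might be sketched as below. These are purely illustrative forms of our own choosing; in particular, the capacity term here is a toy stand-in and is not the paper's eq. (9):

```python
import numpy as np

def three_term_loss(reps: np.ndarray, target: np.ndarray, sigma_g: float = 1.0) -> float:
    """Toy versions of the three loss ideas discussed in these threads.

    reps:   (T, d) network representations along a trajectory
    target: (T, T) binary matrix, 1 iff two positions are within sigma_x
    """
    # Pairwise squared distances and a Gaussian similarity between codes.
    d2 = ((reps[:, None, :] - reps[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / (2 * sigma_g**2))

    # Path invariance: positions flagged as the same place get similar codes.
    invariance = (target * d2).mean()
    # Separation: positions flagged as different places get dissimilar codes.
    separation = ((1 - target) * sim).mean()
    # Capacity (illustrative stand-in): use little coding volume on the
    # training data, penalized here as the spread of the representations.
    capacity = reps.var(axis=0).sum()

    return invariance + separation + capacity

reps = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
target = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]], dtype=float)
loss = three_term_loss(reps, target)
print(loss)
```

The capacity term echoes the demand quoted in the first rebuttal: use as little of the total available coding volume as possible on the training data.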
Rebuttal 1: Rebuttal: ## Global Response to Reviewers Here we take the opportunity to better motivate our paper, and to explain why we view it as a novel and significant contribution. In short, multiple approaches have been taken to understand grid cells, but each contains at least one limitation. Our paper combines their strengths to simultaneously solve their limitations. Broadly, there are four approaches to understanding grid cells: **Approach 1: Basis Function Optimization** (e.g. Dorrell et al. 2023) **Perspective:** Given functions of space constructed as linear combinations of sine and cosine basis functions, this approach provides loss function(s) that make these functions look grid-like. **Limitation(s):** 1. The central question is how one should learn a self-consistent representation of space updated by velocity inputs. This learning of an attractor with a continuum of fixed points is known to be a hard problem, covered in both classic papers (e.g. Seung 1996) and recent reviews (e.g. Khona and Fiete 2022). By explicitly starting with functions of space, Dorrell et al. 2023 avoid this problem. 2. There is no neural network component: this approach is pure basis function optimization. 3. The assumed sine and cosine basis functions lie in the correct function class to learn periodic representations and thus provide an advantageous inductive bias that artificial networks lack. **Approach 2: Supervised Reconstruction of Supervised Spatial Targets by Recurrent Networks** (e.g., Banino et al. 2018, Sorscher et al. 2019) **Perspective:** Supervised learning on spatial target functions sometimes leads to grid-like representations. **Limitation(s):** 1. By assuming access to supervised learning targets encoding privileged absolute spatial position, these approaches again bypass the central question. If biological agents already had access to their absolute spatial position, they wouldn’t need to learn how to track it. 2. 
The design choices between our work and previous works are qualitatively different. Previous supervised learning papers crafted supervised targets to insert grid-like representations into the networks, sometimes contradicting known biological properties of place cells (Schaeffer et al. NeurIPS 2022). In contrast, our learning setup is motivated by first-principles properties of the grid representation identified by previous theory work. 3. Previous supervised papers claimed extreme generality, far beyond what the numerical results support. Schaeffer et al. 2022 showed that their results are tuned and their theory only sometimes predicts empirical results. The criticism of supervised path-integrating networks is this mismatch between claims and empirical results. We were very careful to write our paper in the language of “There exists” and not in the language of “For all” and have avoided using sweeping terms such as “generically” to describe our results. 4. Lastly, our results are different. Despite best efforts, previous supervised path-integrating recurrent neural networks did not learn multiple grid modules. In contrast, our networks do learn multiple modules. Additionally, our networks generalize outside the training distribution, whereas supervised networks fare poorly away from the training arena box where supervised targets have not been defined. Thus, this line of work is not about the task of path integration, but rather about the shape of targets to obtain grid-like tuning curves. Stachenfeld et al. 2017 and Momennejad et al. 2020 are related in that given a place cell representation (derived from a predictive approach), they show that eigendecomposition of their place cell representation results in vectors that are multiperiodic. This point is essentially the same as Dordek et al. 2016 (non-negative PCA of place cells results in vectors that are grid-like), which was subsequently used by Sorscher et al. 
2019 to design better supervised targets for recurrent neural networks, and was shown analytically. **Approach 3: Coding Theory** (e.g., Sreenivasan and Fiete 2011) **Perspective:** Given the grid cell representation, what are the unique coding-theoretic properties that this representation provides? **Limitation(s):** Lists many of the coding-theoretic properties of grid cells, but doesn’t test the necessity or sufficiency of those properties for generating grid cells via an optimization problem or provide explicit loss function(s). **Approach 4: Continuous Attractors** (e.g., Burak and Fiete, 2009) **Perspective:** What is the physical principle underlying grid cells? How do we build a mechanistic model that reproduces grid-like patterns using this principle? **Limitation(s):** 1. Does not provide an optimization problem that leads to grid cells; rather, it builds a mechanistic model by hand. Neuroscientists would say that there is no “normative” answer to why mammals have learned this particular representation over an evolutionary time scale. 2. Produces only a single module of grid cells. 3. This approach relies on the principle of pattern formation and continuous attractor dynamics: this needs the interaction kernel between neurons to be fine-tuned to produce a continuum of fixed points. Our work uses insights from all four approaches and solves their limitations by contributing a unified model that elegantly combines their strengths. *We provide an optimization problem motivated from a normative perspective that does not require privileged access to supervised position information, such that networks trained on the optimization problem reliably learn multi-modular periodic representations and generalize significantly beyond their training distribution.*
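As a small aside on the path-invariance idea that recurs in these responses: the grocery-store example (North then West vs. West then North) works because net displacement is the sum of velocity steps, which is invariant to their order, so any permutation of a trajectory's steps must map to the same representation. A minimal illustration:

```python
import numpy as np

north, west = np.array([0.0, 1.0]), np.array([-1.0, 0.0])

trip_1 = [north, north, west]   # North, North, then West
trip_2 = [west, north, north]   # West, then North, North (same steps, permuted)

# Both permutations of the same velocity steps reach the same endpoint.
end_1 = np.sum(trip_1, axis=0)
end_2 = np.sum(trip_2, axis=0)
print(end_1, end_2)  # identical endpoints
```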
NeurIPS_2023_submissions_huggingface
2023
Sparse Deep Learning for Time Series Data: Theory and Applications
Accept (poster)
Summary: This paper introduces an expansion of sparse deep learning theory, specifically tailored for the analysis of time series data. The authors provide a detailed explanation of the theoretical foundation behind this theory, employing recurrent neural networks (RNNs), particularly long short-term memory (LSTM) networks. The authors demonstrate that their novel approach yields competitive outcomes when compared to other cutting-edge methodologies from diverse research domains, including conformal prediction. Strengths: The paper is well-written and grammatically correct with no evident typos. The narrative is clear: related work, theoretical grounding, computation and experiments. It is clear that the authors have thoroughly investigated the theory of sparse deep learning, and it is evident that they know the field quite well. Hence, the derivations, in principle, seem correct — although I must admit that I found the ideas and derivations presented in the paper to be somewhat challenging to grasp, even though I devoted a considerable amount of time to understanding them. Plus, newcomers to sparse deep learning theory can find the complete derivations in the Appendix, which is quite extensive. The proposed method is tested on three relevant tasks related to time series data, and it is evaluated on at least four real-world datasets. This thorough evaluation approach provides strong empirical evidence to support the theoretical foundations of the method, demonstrating its validity and applicability. Weaknesses: Before outlining the weaknesses, I repeat that I am a new reader of sparse deep learning theory. However, I still find that the paper might exhibit some weaknesses: 1. First of all, while I believe the theoretical side of the paper is of good quality, I encountered difficulties in understanding the concepts due to the overwhelming presence of complex notation. 
Sometimes there are many variables and many subscripts, which for time series can look like, e.g., $y_{k-1:k-R_l}$. This makes the reading difficult and sometimes cumbersome. 2. After reading the paper, I still cannot see why you chose conformal prediction as the main comparison field. I miss some context or motivation in the paper. The paper might already be interesting as a natural extension of sparse deep learning to time series data, but I also believe there should be some justification of why these methods are a good alternative. 3. The point above leads me to the following point: I worry that the literature review for conformal prediction might be scarce. The authors comment on the work by [1], but there is no reference to other works in the same line as this work [3], or to other contemporary works such as [2]. 4. And following this line, I wonder if this theory might be very challenging to apply in real-world scenarios compared to CP. The beauty of CP is that you can basically apply it on top of any ML model in a “plug-and-play” fashion, at the cost of very strong assumptions such as the **exchangeability** assumption. But it is more straightforward to apply and to easily understand than this sparse deep learning technique. I would like to know what the authors think about this. [1] Isaac Gibbs and Emmanuel Candes. Adaptive conformal inference under distribution shift. Advances in Neural Information Processing Systems, 34:1660–1672, 2021. [2] Barber, R. F., Candes, E. J., Ramdas, A., & Tibshirani, R. J. (2023). Conformal prediction beyond exchangeability. *The Annals of Statistics*, *51*(2), 816-845. [3] Romano, Y., Patterson, E., & Candes, E. (2019). Conformalized quantile regression. *Advances in neural information processing systems*, *32*. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Here are some questions/suggestions for the authors: - In Figure 1, shouldn’t the circles contain some variables or naming? Are these the states of the RNN? This figure is a bit misleading to me. - In line 239, what is the *local-trap* issue? A bit more context could help the reader. - Is there any reason why you chose LSTM over other architectures? Some study on whether this applies to GRUs or other RNNs could be interesting. - Do you believe that the LSTM is providing good results because of the forget gate? Or do you think there is some alignment between your results and the use of LSTM that might actually overshadow the developed theory? - In Table 3, when considering the autoregressive order selection, it appears that the errors of the proposed model are comparable to those of the baselines that only employ an RNN. Notably, the absence of standard errors in the results raises some questions. It is possible that I may have overlooked something, or perhaps the authors could provide additional clarity regarding the significance of these findings. One noteworthy aspect is whether the reduced number of hidden links is the primary factor contributing to the performance of the proposed method. Minor questions: - What is PI length? Prediction Interval length? It is just presented in the table with no extra explanation. - Why is the method called PA? How did you choose the name? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not explicitly address the limitations of their work, which raises concerns about the feasibility of deploying their proposed model in practical settings. 
It is crucial to understand the potential challenges and constraints associated with implementation. Considering the computational time and power required, it is essential to evaluate the trade-off involved and compare it to the relatively inexpensive nature of conformal prediction. Understanding these aspects would provide valuable insights into the practical implications and potential drawbacks of the proposed model. Plus, I miss some direct comments on the limitations with respect to conformal prediction, since my prior belief is that their presented comparison w.r.t. conformal prediction methods is scarce. With all this in mind, I consider that the paper needs some further improvement, and I await the authors’ and other reviewers’ comments regarding the possibility of increasing the score. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your review. We provide a point-by-point response to your comments below. $\textbf{W1: Complex notation}$ In the revision, we will simplify the notation and add more explanations to the assumptions and results, making the paper more accessible to the readers. $\textbf{W2: Conformal prediction baselines}$ We chose conformal prediction as the main comparison field because conformal prediction methods have gained significant popularity and adoption as a widely-used approach for uncertainty quantification in various domains, including time series data. By selecting CP as the main comparison field, our aim is to assess the performance of our proposed method against established and widely-recognized approaches for uncertainty estimation. In our experiments, we compared the proposed method with two state-of-the-art CP methods for time series data [1,4]. As suggested, we have included another recently published conformal baseline focused on time series (EnbPI V2) [5] for comparison in the single time series experiment (please see table below). We greatly appreciate the reviewer's input on the literature review. In the revised manuscript, we will provide a more comprehensive review, including references to [2,3]. It's worth noting that [1] presents an adaptive version of [3] specifically designed for data with potential distribution shifts, such as time series data. Additionally, [2] introduced a random swapping mechanism to address potentially non-exchangeable data. Their extension enables conformal prediction to be applied on top of a model trained with weighted samples. Their primary focus is providing theoretical justification of the gap in the coverage rate for the proposed method. 
| Methods | Coverage | Average PI Lengths (standard deviation) | Median PI Lengths (interquartile range) |
| -------- | -------- | -------- | -------- |
| PA offline | 90.06 | 20.19 (3.23) | 20.59 (1.82) |
| AgACI online | 91.05 | 23.27 (14.49) | 21.38 (2.62) |
| ACI $\gamma=0.01$ online | 90.21 | 22.13 (8.50) | 20.68 (2.70) |
| ACI $\gamma=0$ online | 92.89 | 24.39 (9.17) | 22.99 (2.91) |
| EnbPI V2 online [5] | 91.23 | 27.99 (10.17) | 26.55 (5.09) |

$\textbf{W3: Our method applied in real-world scenarios compared to CP methods}$ We appreciate the reviewer's comments. While basic CP methods like split conformal [6] are easy to apply on top of any ML model to generate prediction sets with a marginal coverage guarantee, the width or usefulness of the prediction sets will depend on how well the ML model performs or how the non-conformity scores are defined. Our theoretical results provide convergence to the true model, so ML models will perform well under our assumptions. Moreover, advanced CP methods like [4] and [5] can require multiple runs and ensemble models, making them computationally intensive, especially for deep neural networks. In contrast, our sparse deep learning technique offers several advantages in real-world scenarios. Once the model is trained with the mixture Gaussian priors, which act as an alternative to L1 or L2 regularization, the prediction intervals become a byproduct of the training process. This means that no extra training or heavy computation is needed to produce prediction intervals. $\textbf{Q1: Figure 1}$ We apologize for the confusion. The circles in Figure 1 represent the hidden states of a multi-layer RNN. To avoid ambiguity, we will include explicit labeling of the variables in the camera-ready version to enhance clarity. $\textbf{Q2: What is the local-trap issue?}$ Apologies for the confusion. In this context, the term 'local-trap' refers to the same phenomenon as 'local minima.' 
There has been substantial research examining the favorable properties of the optimization landscape for over-parameterized neural networks, e.g., [7], which aims to explain why gradient-based methods work well for highly non-convex neural network optimization problems. However, for our model, the mixture Gaussian prior introduces heavy penalties on parameters near $0$, so it is unclear how the optimization landscape would be affected and local traps could become an issue. That is the motivation behind applying the prior annealing approach. $\textbf{Q3: Is there any reason why you choose LSTM?}$ We chose LSTM as the main network in our experimental setup to facilitate fair comparisons with current state-of-the-art methods for tasks like time series forecasting and NLP. Many of these existing methods employ LSTM, and by using the same network architecture, we can better assess the effectiveness of our proposed method relative to these benchmarks. $\textbf{Q4: Table 3}$ Thank you for your feedback. In our table, we reported the corresponding standard deviations for all metrics except FSR and NSR. The reason for the absence of standard deviations for FSR and NSR is that their calculation requires variable selection results from all five datasets, making it infeasible to compute standard deviations for these metrics. If you require further clarification on any aspect, please feel free to let us know. $\textbf{Q5 and Q6: PI and PA:}$ Yes, you are correct. "PI length" stands for Prediction Interval length. To clarify this in the revised version, we will add a brief explanation of "PI length" in the table caption. We chose the name 'PA' (Prior Annealing) for our method because a crucial aspect of our algorithm involves annealing the mixture Gaussian prior. [1] Adaptive conformal inference under distribution shift. NeurIPS2021. [2] Conformal prediction beyond exchangeability. The Annals of Statistics, 2023. [3] Conformalized quantile regression. NeurIPS2019. 
[4] Adaptive conformal predictions for time series. ICML2022. [5] Conformal prediction interval for dynamic time. ICML2021. [6] Distribution-free predictive inference for regression. JASA2018. [7] The loss surface of deep and wide. ICML2017. --- Rebuttal Comment 1.1: Title: Update score Comment: I thank the authors for the detailed response. I believe that most of my concerns were properly addressed, and I checked that I shared some concerns with other reviewers who also worried/struggled to grasp the idea of the paper at certain parts because of the complex notation. ### W2: Conformal Prediction Baselines The authors correctly addressed my concerns and investigated other conformal methods for time series data. However, is there any reason why the authors omitted the comparison w.r.t. [1]? I think this work would greatly improve with this comparison and would become more relevant for other researchers using conformal prediction, since this work is a reference work for CP under no exchangeability assumption. ### W3: Comparison to CP methods in real-world datasets I still have doubts about how sound the method can be in real-world scenarios. I mean, conformal prediction became popular because it can be __easily__ adapted to any ML model. However, the method proposed by the authors must be tuned for so many hyperparameters, there is a lot of theory that must be digested to fully understand the method, etc. So I believe it cannot be an easy tool to deploy in practice. I believe the authors properly answered this concern, but maybe add some direct comments about the limitations of the method in practice: training time, hyperparameter sensitivity for performance, etc.? Regarding the questions, I believe the authors properly addressed my concerns. Overall, I think the authors answered my concerns, as well as other reviewers' concerns. I update my score accordingly, and I await the discussion with the other reviewers for the final decision.
I believe this paper can be interesting, as it constitutes the extension to the time domain for sparse deep learning theory. Thank you again to the authors for the detailed responses. # References [1] Conformal prediction beyond exchangeability. The Annals of Statistics, 2023. --- Reply to Comment 1.1.1: Comment: Thank you for your continued valuable feedback! Your insights and comments have been instrumental in improving our paper. We sincerely appreciate your time and effort in reviewing our work. ## W2: Conformal Prediction Baselines We greatly appreciate and value your opinions. In response to your suggestion, we have included NexCP [1] as an additional baseline for comparison in our revised manuscript. NexCP [1] extends the original CP methods [2] to non-exchangeable data by allocating predefined and fixed weights, represented as $\\{w_i\\}^{n}\_{i=1}$, to each data point within the calibration set denoted by $\\{z\_i = (x\_i, y\_i)\\}\_{i=1}^{n}$, where $w_1,\\dots,w\_n \\in [0,1]$. These weights play a crucial role in the method. Intuitively and theoretically, higher weights $w\_i$ are assigned to data points that are considered more "trustworthy," implying they originate from a distribution closely related to the test point $z\_{n+1} = (x\_{n+1}, y\_{n+1})$. For instance, when a data point $z\_i$ corresponds to a specific time step $i$, the weights $w\_1 \\leq \\dots \\leq w\_n$ might be chosen to favor recent data, while attaching less significance to data from distant time periods. In section 4.3 of [1], it is acknowledged that the efficacy of their proposed method is influenced by the weight choices. However, they have left the optimal selection of weights, and even the quantification of optimality, for future exploration. In our application of their method to the dataset described in section 5.1, we opted to use the same weights that were employed in their experiments, specifically $w\_i = 0.99^{n+1-i}$. 
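For concreteness, the weighted quantile underlying this NexCP-style procedure can be sketched in a few lines. This is an illustrative simplification (function and variable names are ours, not from [1] or our released code): the calibration scores are weighted by normalized $w\_i$, the test point's unit weight is placed on $+\infty$, and the prediction interval is the point prediction plus or minus the resulting quantile.

```python
import numpy as np

def weighted_conformal_quantile(scores, weights, alpha=0.1):
    """Weighted (1 - alpha) quantile of calibration nonconformity scores,
    in the style of non-exchangeable CP: weights are normalized including
    a unit weight for the test point, which is placed on +infinity.
    A simplified sketch, not the authors' implementation."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Normalized weights, with the test point's unit weight appended.
    p = np.append(weights, 1.0) / (weights.sum() + 1.0)
    s = np.append(scores, np.inf)
    order = np.argsort(s)
    cum = np.cumsum(p[order])
    # Smallest score whose cumulative normalized weight reaches 1 - alpha.
    idx = np.searchsorted(cum, 1.0 - alpha)
    return s[order][idx]

# Illustrative use with the geometric weights w_i = 0.99^(n+1-i).
n = 200
scores = np.abs(np.random.default_rng(0).standard_normal(n))
w = 0.99 ** (n + 1 - np.arange(1, n + 1))  # favor recent calibration points
qhat = weighted_conformal_quantile(scores, w, alpha=0.1)
# Prediction interval: yhat(x_{n+1}) +/- qhat
```

With uniform weights this reduces to the usual split-conformal quantile; heavier recent weights shift the quantile toward the behavior of recent scores.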
To ensure consistency, we followed the identical model and training procedures detailed in appendix G and section 5.1 of our paper. As a result of these additions, we have obtained updated results, which are presented in the table below. Additionally, we will provide all the relevant code for the newly introduced baselines in our revised manuscript. The code was adapted from the original implementations available in the published code bases [1,3].

| Methods | Coverage | Average PI Lengths (standard deviation) | Median PI Lengths (interquartile range) |
| --------------------- | --------------------- | --------------------- | --------------------- |
| PA offline | 90.06 | 20.19(3.23) | 20.59(1.82) |
| AgACI online | 91.05 | 23.27(14.49) | 21.38(2.62) |
| ACI $\\gamma = 0.01$ online | 90.21 | 22.13(8.50) | 20.68(2.70) |
| ACI $\\gamma = 0$ online | 92.89 | 24.39(9.17) | 22.99(2.91) |
| EnbPI V2 online | 91.23 | 27.99(10.17) | 26.55(5.09) |
| NexCP online | 91.18 | 24.41(10.40) | 22.88(2.86) |

[1] Conformal prediction beyond exchangeability. The Annals of Statistics, 2023. [2] Algorithmic learning in a random world, volume 29. Springer, 2005. [3] Conformal prediction interval for dynamic time. ICML2021. --- Reply to Comment 1.1.2: Comment: ## W3: Comparison to CP methods in real-world datasets A direct comparison between conformal methods and our method might appear slightly unfair. Conformal methods are post-training inference approaches, while our method is integrated, covering both model training and inference. In our method, hyperparameter tuning is limited to the model training phase, and the inference step is straightforward, involving the computation of the inverse of the Fisher information matrix and prediction intervals. We wish to emphasize that even for the inference part alone, conformal methods are not free from hyperparameters.
For instance, when adapting conformal methods to time series data, [1] also introduces additional hyperparameters—specifically, the weights for each data point. Furthermore, [4] introduces the parameter $\gamma$ for their adaptive procedures. Additionally, it is worth noting that the performance of conformal methods can be significantly influenced by the model learned from the training data. In contrast, our sparse learning method theoretically ensures that the resulting neural networks are robust in terms of prediction. Finally, we would like to mention that we have indeed gained a significant amount of experience in hyperparameter tuning based on our examples: ### Hyperparameters for the mixture Gaussian prior: [i] $\\lambda_n$: Typically selected from the set $\\{1e-6, 1e-7\\}$, this hyperparameter exhibits minimal sensitivity within our method. Its tuning primarily involves adjusting sparsity in model sparsification tasks, if necessary. [ii] $\\sigma\_{1,n}^2$: This hyperparameter is generally chosen from $\\{0.5, 0.1, 0.05\\}$. As explained in section E of the appendix, it controls the degree of penalty in the free space, i.e., for parameters whose absolute values exceed the threshold values. For model sparsification tasks, particularly for extremely high sparsity regimes (i.e., $80\\% - 95\\%$), based on our experiments, a slightly larger value could lead to slightly better performance. Therefore, one can opt for $0.5$. However, it's worth noting that the performance of our algorithms is not significantly affected by this hyperparameter. For model selection and uncertainty quantification tasks, a value of $0.05$ or $0.1$ can be chosen without impacting the performance of our method. [iii] $(\\sigma\_{1,n}^{end})^2$: Our method exhibits low sensitivity to this hyperparameter as well, and it is typically selected from the set $\\{1e-6, 1e-7, 1e-8\\}$. A practical approach is to choose a value that closely aligns with the $(\\sigma\_{1,n}^{init})^2$ value. 
[iv] $(\\sigma\_{1,n}^{init})^2$: This is the first hyperparameter that can be considered as "$\\textbf{sensitive}$". For model sparsification tasks, this hyperparameter essentially determines the final sparsity. Please refer to our explanations in section E of the appendix for guidance on determining the value of this hyperparameter to achieve a specific target sparsity. For model selection and uncertainty quantification tasks, supported by both our theoretical findings (Theorem 3.8) and experimental results, this hyperparameter should be smaller for relatively larger models and larger for relatively smaller models. One can generally choose from the set $\\{1e-5, 1e-6, 1e-7\\}$. ### Hyperparameters for the prior annealing stage: Upon initial examination, it might appear that the prior annealing stage introduces an additional set of hyperparameters that require tuning, namely $c$ (temperature), $T\_1$, $T\_2$, and $T\_3$. However, the only hyperparameter that requires a modest degree of tuning is the number of training iterations that reduces $\\sigma\_{0,n}^2$ from $(\\sigma\_{0,n}^{init})^2$ to $(\\sigma\_{0,n}^{end})^2$, or equivalently, $T\_3-T\_2$. The sensitivity of this value is relatively low, as long as the count of model parameters whose absolute values lie below the current threshold (which is dependent on the present value of $\\sigma\_{0,n}^2$) remains stable—no abrupt spikes or declines. In terms of limitations, one potential concern pertains to the calculation of the inverse of the Fisher information matrix. For large-scale problems, the sparsified model could still retain a large number of non-zero parameters. In such instances, the computational feasibility of calculating the Hessian matrix, essential for prediction interval computations, might become compromised. Nonetheless, an alternative avenue exists in the form of the Bayesian approach, which circumvents the matrix inversion challenge. 
A concise overview of this Bayesian strategy is provided in section F of the appendix. [1] Conformal prediction beyond exchangeability. The Annals of Statistics, 2023. [4] Adaptive conformal inference under distribution shift. NeurIPS2021. --- Reply to Comment 1.1.3: Title: Thank you! Comment: Thank you once again for your thoughtful feedback, and we eagerly await any further insights you might have concerning these enhancements or any other aspects of our work.
Summary: This paper focuses on the theoretical analysis of sparse deep learning for time series data. Statistical properties of sparse RNNs are investigated, including consistency and asymptotic behavior. The paper presents numerical results showing that sparse deep learning outperforms existing methods in uncertainty quantification for time series prediction, as well as in identifying the autoregressive order for time series data and in large-scale model compression. The proposed method has practical implications in fields such as finance, healthcare, and energy. Strengths: (1) The paper addresses an important and underexplored topic by studying sparse deep learning for dependent time series data. This fills a gap in existing research, which has mostly focused on i.i.d. data. (2) The paper presents empirical results that show the superiority of sparse deep learning over existing methods in uncertainty quantification and autoregressive order identification for time series data. This demonstrates the practical effectiveness of the proposed method. Weaknesses: (1) Some of the theorems cited from references are not given in the text and some symbols are confusing, e.g., 1) Please give the concrete content of Theorem 2 from [1] that the authors cite. 2) Please give the definition of $O_{P^*}$ on line 174. 3) The lemma used on line 575 should be Lemma S1 of Section 4 in [2]. Addressing the above weaknesses would enhance the readability of the article. (2) There are related references that study the theoretical analysis of deep neural networks for temporally dependent observations, such as [3]. I encourage the authors to check them out and see if they should be included in the related work. [1] Pentti Saikkonen. Dependent versions of a central limit theorem for the squared length of a sample mean. Statistics & Probability Letters, 1995. [2] Yan Sun, Qifan Song, and Faming Liang. Consistent sparse deep learning: Theory and computation.
Journal of the American Statistical Association, 2021. [3] M. Ma and A. Safikhani. Theoretical analysis of deep neural networks for temporally dependent observations, in NeurIPS, 2022. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (1) The authors assume that the time series is an $\alpha$-mixing process, but in fact there are many other mixing processes, such as $\beta$-mixing and $\tau$-mixing. Please explain how $\alpha$-mixing affects the theoretical analysis of the algorithms. Will different mixing conditions lead to different results? (2) Does the data used in the experiments satisfy $\alpha$-mixing? If yes, please provide some supporting materials. Some autoregressive models are not $\alpha$-mixing, e.g., $\mathbf{x}_t = f_0(\mathbf{x}_{t-1}) + \epsilon_t$, $t \in \mathbb{Z}$, where $f_0:[-K,K]^d \rightarrow [-K,K]^d$ is a Lipschitz function with Lipschitz constant $\leq 2$ and $\epsilon_t \sim \mathcal{B}(0.5)$ [4]. (3) Please give the proof of Lemma B.1 for completeness. (4) What is the biggest difference between the algorithm the authors propose in section 4 and the algorithm proposed in [5]? (5) This article is too similar to [5] and [2]. It seems that the result of this paper is an extension from i.i.d. observations to dependent data. Please clarify the main difficulties in the theoretical analysis for time series data compared with i.i.d. data. [2] Yan Sun, Qifan Song, and Faming Liang. Consistent sparse deep learning: Theory and computation. Journal of the American Statistical Association, 2021. [4] D. W. Andrews. Non-strong mixing autoregressive processes. Journal of Applied Probability, 1984. [5] Yan Sun, Wenjun Xiong, and Faming Liang. Sparse deep learning: A new framework immune to local traps and miscalibration. Advances in Neural Information Processing Systems, 2021. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This is a theoretical work, but part of the proof is a little vague, such as the proofs of Theorem 3.11 and Theorem 3.12. The authors should highlight the key differences in analysis techniques between their work and [2] [5] (instead of just using different tools). [2] Yan Sun, Qifan Song, and Faming Liang. Consistent sparse deep learning: Theory and computation. Journal of the American Statistical Association, 2021. [5] Yan Sun, Wenjun Xiong, and Faming Liang. Sparse deep learning: A new framework immune to local traps and miscalibration. Advances in Neural Information Processing Systems, 2021. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your review. We provide a point-by-point response to your comments below. $\textbf{W1: Readability: Theorems and Lemmas}$ Thank you very much for your thorough review of our paper. We sincerely appreciate your feedback, and in the camera-ready version, we will address the mentioned weaknesses as follows: (i) We will add the concrete content of Theorem 2 from [2] to the text for better clarity and reference. (ii) $O_{P^{*}}(a_n)$ is used to represent a sequence of random variables bounded in probability. Let $\\{X_n\\}$ be a sequence of random variables, and let $\\{a_n\\}$ be a sequence of strictly positive reals. We say $X_n/a_n$ is bounded in probability if, for every $\epsilon > 0$, there exists $M_\epsilon > 0$ such that $$P^{*}(|X_n/a_n| > M_\\epsilon) < \\epsilon$$ for all $n$, and we write $X_n = O_{P^{*}}(a_n)$. We will add this definition in the revision. (iii) You are correct; the lemma used on line 575 should indeed refer to Lemma S1 of Section 4 in [4]. We will make this correction in the revised version of the paper. $\textbf{W2: Related references}$ Thank you for providing such useful information! We greatly appreciate your suggestion, and we will add the work [3] to the related work section for discussion. Upon careful reading of [3], we observed that their focus is on establishing consistency rates for prediction error in the context of temporally dependent observations. In contrast, our work takes a Bayesian approach, which offers distinct statistical properties, to address a similar but inherently different problem. Please also see our responses to reviewer AFD6. $\textbf{Q1: Different mixings}$ Our results apply to other types of mixing sequences, as implied by the following hierarchy of the five mixing conditions: (i) $\psi$-mixing implies $\phi$-mixing; (ii) $\phi$-mixing implies both $\rho$-mixing and $\beta$-mixing; (iii) $\rho$-mixing and $\beta$-mixing each imply $\alpha$-mixing.
See reference [1] (Theorem 3.11) for theoretical justifications of the above hierarchy. $\textbf{Q2: Data with mixing property}$ Thank you very much for your suggestion. We use those datasets in order to compare fairly with current state-of-the-art methods. We conducted additional AR order selection experiments using the following exponential autoregressive model from [5]: $$y_i = \left(0.8-1.1\exp\\{-50y_{i-1}^2\\}\right)y_{i-1} + \eta_i,$$ where $\eta_i \sim N(0,1)$ are i.i.d. Gaussian random noises. This model is shown to be $\alpha$-mixing according to [5]; please see our new experimental details and results for this model in our responses to reviewer UrAN (W4). $\textbf{Q3: Proof of Lemma B.1}$ We will give the complete proof of Lemma B.1 in the revision. $\textbf{Q4: Algorithm}$ Please see our responses to reviewer AFD6 (W2). $\textbf{Q5: Novelty}$ Please see our responses to reviewer AFD6 (W1). [1] R.C. Bradley (2007) Introduction to Strong Mixing Conditions. Vol. 1. Kendrick Press, Heber City (Utah). [2] Dependent versions of a central limit theorem for the squared length of a sample mean. Statistics & Probability Letters, 1995. [3] Theoretical analysis of deep neural networks for temporally dependent observations. NeurIPS2022. [4] Consistent sparse deep learning: Theory and computation. JASA2021. [5] Identification of nonlinear time series: First order characterization and order determination. Biometrika, 77(4):669–687, 1990.
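For reference, the exponential autoregressive model in Q2 above can be simulated directly; the sketch below (the function name is ours, not from the paper) generates a training sequence of length 10000, matching our experimental setup:

```python
import numpy as np

def simulate_expar(n, seed=0):
    """Simulate the exponential autoregressive model
    y_i = (0.8 - 1.1 * exp(-50 * y_{i-1}^2)) * y_{i-1} + eta_i,
    with eta_i ~ N(0, 1) i.i.d. Gaussian noise."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for i in range(1, n):
        # The autoregressive coefficient depends on the previous value:
        # near -0.3 for y close to 0, approaching 0.8 for large |y|.
        coef = 0.8 - 1.1 * np.exp(-50.0 * y[i - 1] ** 2)
        y[i] = coef * y[i - 1] + rng.standard_normal()
    return y

# One training sequence of length 10000, as in the additional experiments.
y = simulate_expar(10000)
```

Since the coefficient is bounded in magnitude by 0.8, the simulated series remains stable.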
Summary: For i.i.d. data, sparse DL has been shown as a way towards consistency of input-output mappings and a well-understood distribution of model predictions. This theory was missing for time-series data, however, and the authors introduce it here. In particular, the following results for RNNs with Gaussian mixture parameters are presented: - Consistency of posteriors, structure selection and input-output mappings - (Asymptotic) normality of predicted values Strengths: The paper was well organized and written. Extending the recent results from sparse DL work to time series is valuable and a novel contribution. The consistency results presented here are convincing and valuable (at least to this reviewer who is not an expert in this field) and should pave the way for much further work in the area -- the assumptions made in the theoretical parts are fair for this nascent area of study and provide a good starting point. The authors show how the method can be framed as regularization by way of a Laplace approximation at the MAP estimator, which is useful in practice. The experiments provide evidence of superior performance in comparison to CP methods. Weaknesses: Comparison to other possible approaches to uncertainty quantification is lacking: there are no comparisons or discussion of this method against the alternatives mentioned in the introduction: "include multi-horizon probabilistic forecasting [33], dropout-based methods [34], and recursive Bayesian methods [35]". This is the case with the experiments, but also on the theoretical side. Naturally, theory for the other methods may not exist in the same vein, but it would have been useful if the authors tried to make some further comparisons. One limiting assumption here is that $\mu^{\star}$ can be approximated by a sparse RNN arbitrarily well (i.e. a ground-truth sparse model). Especially since the connectivity of this *true* RNN model is assumed to be limited.
Also, as the authors point out, universal approximation does not hold fully (but can still hold for some nonlinear function classes). Relatedly, Theorem 3.8 relates to the estimation error only, but what about the approximation error? The assumptions and conditions required for Theorems 3.8 onward are not discussed in sufficient depth. The AR order selection experiments, while promising, are quite narrow (only 5 datasets from a single model). More diverse datasets would be needed, with different settings (window sizes etc.). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Minor things: There could be a bit more discussion about mixture priors: if and why they are a reasonable choice. Consider adding a comma in line 117 to make the meaning of "only" and "or" clearer in the sentence "the input $x_i$ can contain only $y_{i-1}$ or $y_{i-1:i-r}$ for some r>1". Now it is not clear if "only" refers to "only either of a or b", or "only a, or if not a then b is the only alternative". I found it confusing that $H$ was used to denote the layers and $L$ the number of hidden neurons. I would have found them more logical the other way around. As a misc. note: I found the font of the paper odd, but that could be a local issue on my side. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In general, limitations are discussed fairly well -- especially given the limited space. As always, this type of theoretical work is limited by the particular assumptions made. Here, e.g., a Gaussian mixture prior is assumed, and I would be curious to hear about the limitations of this choice of prior in more detail. What alternatives are there? The mixing assumptions can be limiting too, but the authors do acknowledge that much.
As mentioned above, it would also be nice to have a bit more discussion on the assumptions and conditions of Theorem 3.8 and the other subsequent theoretical results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your review. We provide a point-by-point response to your comments below. $\textbf{W1: Comparison to other uncertainty quantification approaches}$ We apologize for the confusion. We actually included MQ-RNN [1] (multi-horizon probabilistic forecasting) and DP-RNN [2] (dropout-based method) as the two baselines for the set of time series experiments. We will make sure the baselines used in the experiments are clear by adding the corresponding citations in the table. $\textbf{W2: Approximation error}$ Apologies for the confusion. The convergence rate of Theorem 3.8 actually depends on the approximation error. It is essentially a sum of the approximation error and the estimation error, i.e., $\epsilon_n^2 = O(\varpi^2_n) + O(\zeta_n^2)$ in line 173 of the paper. Please also see our reply to W3Q3 to reviewer TwH7. $\textbf{W3: Discussions about Theorem 3.8}$ Thank you for the suggestions. We will add more discussion of the assumptions and conditions. The additional assumption for Theorem 3.9 is essentially an identifiability assumption, which is commonly used for variable selection. For Theorems 3.11 and 3.12, additional assumptions and discussions are given in Section C of the appendix. $\textbf{W4: Additional AR order selection experiments}$ Thank you very much for your suggestion! As suggested, we have conducted additional AR order selection experiments with more window sizes (6 window sizes) using the following exponential autoregressive model from [5]: $$y_i = \left(0.8-1.1\exp\\{-50y_{i-1}^2\\}\right)y_{i-1} + \eta_i,$$ where $\eta_i \sim N(0,1)$ are i.i.d. Gaussian random noises. We follow similar settings as in Section 5.2, i.e., we generated 5 datasets, with the training sequence having a length of 10000 and both the validation and test sequences having a length of 1000. The results are given in the table below.
| Model | Window size | FSR | NSR | AR order | #hidden link | MSPE | MSFE |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| PA-RNN | 1 | 0 | 0 | 1(0) | 0(0) | 1.003(0.005) | 1.004(0.004) |
| PA-RNN | 3 | 0 | 0 | 1(0) | 0(0) | 1.006(0.005) | 0.999(0.004) |
| PA-RNN | 5 | 0 | 0 | 1(0) | 0(0) | 1.007(0.005) | 1.000(0.004) |
| PA-RNN | 7 | 0 | 0 | 1(0) | 0(0) | 1.006(0.005) | 1.000(0.003) |
| PA-RNN | 10 | 0 | 0 | 1(0) | 0(0) | 1.002(0.006) | 1.002(0.004) |
| PA-RNN | 15 | 0 | 0 | 1(0) | 0(0) | 1.002(0.007) | 1.001(0.004) |

This nonlinear autoregressive model has an AR order of 1. We use the same hyperparameters for all window sizes, and our method achieved perfect results for all window sizes with respect to all model selection metrics. This might be because this is a relatively simple model compared with the one used in Section 5.2. We will add both the results and the details of these additional experiments in the revised version. $\textbf{Q1: Discussion about mixture priors}$ The mixture Gaussian prior can be viewed as a continuous relaxation of the spike-and-slab prior, which puts a point mass at 0. Compared to the spike-and-slab prior, a significant advantage of the mixture Gaussian prior is that it is differentiable, allowing stochastic gradient-based algorithms to be used for computation. The mixture also allows us to put enough weight on the neighborhood of 0, which essentially allows the prior to satisfy conditions (a)-(c) in Section B of the appendix. $\textbf{Q2: Line 117}$ Thank you, we will add a comma in the revised version to clarify the meaning as you suggested. $\textbf{Q3: Notation H}$ We sincerely apologize for the confusion. We mainly followed the notations used in [3,4], but we will make it clearer in the revised version. $\textbf{Q4: Font of the paper}$ We will have this issue addressed in the revision.
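To make the Q1 discussion concrete, below is a minimal sketch of the mixture Gaussian log-prior density; the hyperparameter values are illustrative only, not the settings used in the paper:

```python
import numpy as np

def log_mixture_gaussian_prior(beta, lam=1e-6, sigma0=1e-4, sigma1=0.1):
    """Elementwise log-density of the two-component mixture Gaussian prior
    pi(beta) = lam * N(0, sigma1^2) + (1 - lam) * N(0, sigma0^2).
    The narrow "spike" N(0, sigma0^2) concentrates mass near 0 (mimicking
    spike-and-slab) while the density remains differentiable in beta.
    Hyperparameter values here are illustrative defaults, not the paper's."""
    beta = np.asarray(beta, dtype=float)

    def log_normal(x, s):
        return -0.5 * np.log(2.0 * np.pi * s**2) - x**2 / (2.0 * s**2)

    a = np.log(lam) + log_normal(beta, sigma1)     # wide "slab" component
    b = np.log1p(-lam) + log_normal(beta, sigma0)  # narrow "spike" component
    m = np.maximum(a, b)                           # log-sum-exp for stability
    return m + np.log(np.exp(a - m) + np.exp(b - m))
```

Maximizing the log-likelihood plus this log-prior is the regularization view: the gradient of the log-prior shrinks parameters near zero toward the spike, while parameters in the "free space" are only weakly penalized by the slab.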
[1] A multi-horizon quantile recurrent forecaster. arXiv preprint, 2017. [2] Dropout as a bayesian approximation: Representing model uncertainty in deep learning. ICML2016. [3] Consistent sparse deep learning: Theory and computation. JASA2021. [4] Sparse deep learning: A new framework immune to local traps and miscalibration. NeurIPS2021. [5] Identification of nonlinear time series: First order characterization and order determination. Biometrika, 77(4):669–687, 1990. --- Rebuttal Comment 1.1: Comment: Thank you for your answers -- I think they are reasonable and I would gladly see the paper published. Reading through all the reviews and rebuttals it is evident to me that I don't possess sufficient understanding of the mathematical details and relevant literature to fairly judge those aspects. Thus I have lowered the confidence of my review from 2 to 1.
Summary: This paper proposes to extend sparse deep learning to the context of time series data. This context differs from the classical one as samples are not *i.i.d.* but dependent. The paper shows theoretical results -- posterior consistency and asymptotic normality of the weights --, a computation method based on prior annealing, and some numerical experiments that showcase the performance of the method on real datasets. Strengths: - The paper extends sparse deep learning results to the challenging context of dependent data, applicable to time series. - The many reported results show improvement of the method compared to existing state-of-the-art methods, and the method seems to yield interpretable models for time series. Weaknesses: - It is not totally clear what is really different between the results in [17] and the results presented here (see **Q1**). - The writing is not always clear (**Q2**), in particular for non-experts in sparse deep learning. It is not clear why `Eq.(4)` admits a non-empty set of minimizers (**Q3**). - There are many hyperparameters in the method, and while their values are reported, it is not clear how to select them (**Q4**). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - **Q1** - The theoretical results are very similar to the ones of [17], which considers *i.i.d.* data with DNNs. As the proposed method considers the input of the network as fixed segments $x_{i:i-M_l+1}$, the considered networks can be assimilated to weight-tied DNNs, so the class of architectures is not really different. And in `Theorem3.8`, it is not clear where the dependence between the samples intervenes. In particular, I am surprised that we get similar sample efficiency rates between the two settings, and I wonder what is lost by dropping the i.i.d. assumption. Could the authors comment on this point? - **Q2** - Many details are deferred to the appendix or referred to in other articles. This makes the paper sometimes hard to read.
In particular, `Section.4` is very hard to follow, especially the part on the transformation of the Bayesian method into a regularization method. As this constitutes the core of the proposed method, I think this could be better explained in the main article. - **Q3** - In equation (4), it is unclear what the conditions on $\varpi_n$ are so that the argmin is not empty. In particular, the size of the neural network in the set $\mathcal G_n$ is constrained in `Assumption3.4`. This specific assumption that there exists a minimizer is used in the equation below l.657, which relies on the fact that for all $n$, the set $\{(\beta, \gamma) : \|\mu(X, \beta, \gamma) - \mu^*\|_{L^2(\Omega)} \le \varpi_n\}$ is not empty. This is not guaranteed, I think, in particular as $\varpi_n$ goes to 0 at the same rate as the network size increases, while the number of data points to interpolate increases much faster. Note that a similar question arises in the proof for [17]. It seems that the results in [17] show that universal approximation holds for a specific class of functions even for sparse DNNs, but it is unclear if the constraints in `Assumption3.4` allow for this. But maybe this is due to a misunderstanding on my side. - **Q4** - The method relies on many hyperparameters, which are specified in the appendix. However, the process for their selection is not reported, and it would be nice to understand how sensitive the model is to hyperparameter selection. **Minor remarks and typos:** - **m1** - `Section 5.2` - Isn't it possible to estimate the AR order for window size = 1 by looking at the average dependency length? Or at least estimating whether changing one sample changes the result? - **m2** - The proposed results seem to relate to compressed sensing and sparse coding theory; it would be nice to draw the connection between these two fields in the introduction. But I have no specific results in mind, so maybe it is too far-fetched. *unnumbered remarks don't call for answers* - Eq.
(2) - Missing $+ v^{H_n}z^{H_n}_{i-1}$ at the end to show that the structure is not just feedforward? - l.123 - $\gamma = \{\gamma_j \in \{0, 1\} : j=1\dots K_n\}$ -> $\gamma \in \{0, 1\}^{K_n}$? - l.155 - "an RNN of size ... has been" -> "... is". - l.192 - mismatch between the name and the acronym: `MIPP` vs `marginal posterior inclusion probability`. - l.218 - $H_n$ is already the number of layers (l.113); finding a different notation would help the reader follow the arguments. - l.305 - "In particular, Our method" -> "our". Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: * see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your review. We provide a point-by-point response to your comments below. $\textbf{W1 and Q1: Novelty}$ The seminal work [1] established a general theoretical framework for studying the asymptotic behavior of posterior distributions and Bayesian estimators for high-dimensional statistical models. The posterior consistency theory established therein requires three conditions on the models and the prior, namely: (a) The set of models considered in the analysis cannot be too large (in terms of packing numbers). (b) The set of models almost encompasses the support of the prior distribution. (c) The prior distribution places a sufficient amount of mass near the true model. The work [2] further extended this theory to non-i.i.d. data under the same three general conditions, leading to a similar posterior contraction rate. Dropping the i.i.d. assumption disables many inequalities used in the i.i.d. case, necessitating specific considerations. However, the resulting contraction rates are not necessarily very different. In the same vein, we obtain a posterior contraction rate similar to that of reference [4] (cited in the paper). Please also see our response to reviewer AFD6 (W1). $\textbf{W2 and Q2: Transformation of the Bayesian method}$ In the revision, we will add a lemma regarding applications of the Laplace approximation to Bayesian neural networks. Similar to Theorem 2.3 in [4], the main idea is to connect posterior consistency to the consistency of the maximum a posteriori (MAP) estimator. Then the prior annealing algorithm is employed for finding the optimum of the posterior. $\textbf{W3 and Q3: Equation 4 and approximation error}$ This is a very thoughtful question. Lemma 4.1 of [3], through the trick of independent block sequence construction, shows that many properties of i.i.d. processes can be extended to mixing processes.
While the lemma was proven for the case of $\beta$-mixing, the author did mention her doubts about its applicability to $\alpha$-mixing. Therefore, at least for $\beta$-mixing sequences (which imply $\alpha$-mixing, i.e., can be viewed as a subclass of $\alpha$-mixing), the non-emptiness of the sparse RNN set can be guaranteed for many classes of functions. This issue will be mentioned in the revision. Moreover, the contraction rate of our posterior consistency results is essentially a sum of the approximation error and the estimation error, i.e. $\epsilon_n^2 = O(\varpi^2_n) + O(\zeta_n^2)$ in Theorem 3.8, which provides the flexibility to combine our results with other approximation theories to get the exact order of $\epsilon_n^2$. $\textbf{W4 and Q4: Hyperparameters}$ Thank you for your feedback. Regarding the four hyperparameters related to the mixture Gaussian priors $(\lambda_n, \sigma_{1,n}, \sigma_{0,n}^{init}, \sigma_{0,n}^{end})$, our algorithm is not sensitive to $\lambda_n$ and $\sigma_{0,n}^{end}$. $\lambda_n$ is only used for adjusting the target sparsity if needed, and it is usually selected from {1e-6, 1e-7}. $\sigma_{0,n}^{end}$ is generally set among {1e-5, 1e-6, 1e-7}, with the exact value depending on $\sigma_{0,n}^{init}$. We gradually decrease $\sigma_{0,n}^{init}$ to $\sigma_{0,n}^{end}$ during prior annealing. Based on the number of iterations/epochs and the learning rate used in this stage, the difference between $\sigma_{0,n}^{end}$ and $\sigma_{0,n}^{init}$ should be adjusted so that sparsity remains stable. Selecting $\sigma_{0,n}^{init}$ depends on the specific task. For example, as explained in Section E of the appendix, we select $\sigma_{0,n}^{init}$ by achieving the target sparsity for model sparsification tasks. For other tasks like model selection and uncertainty quantification, $\sigma_{0,n}^{init}$ is not as sensitive as long as sparsity is stable during annealing. Our algorithm is also robust to the temperature hyperparameter.
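For illustration, the gradual decrease of $\sigma_{0,n}$ from $\sigma_{0,n}^{init}$ to $\sigma_{0,n}^{end}$ could be sketched as a simple schedule; the linear shape and the endpoint values below are assumptions for illustration only, not the paper's actual annealing rule:

```python
def sigma0_schedule(step, total_steps, sigma0_init=1e-4, sigma0_end=1e-6):
    """Interpolate sigma_0 from its initial to its final value during annealing.

    Hypothetical sketch: the linear interpolation and the default endpoint
    values here are assumptions; only the idea of decreasing sigma_0 from
    an init value to an end value comes from the discussion above.
    """
    frac = min(max(step / total_steps, 0.0), 1.0)  # clamp to [0, 1]
    return sigma0_init + frac * (sigma0_end - sigma0_init)
```

A slower decrease (more annealing steps, or a smaller gap between the two endpoints per step) would correspond to the stability consideration mentioned above.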
We will add a section in the appendix with instructions for hyperparameter selection in the revised version. $\textbf{Minors 1:}$ Thank you for your feedback. With our selected network structure, the AR order will not be directly available in the case window size = 1. However, in practice, it is still possible to estimate the AR order by looking at the average dependency length. This point will be mentioned in the revision. $\textbf{Minors 2:}$ Thank you very much for your valuable suggestions! We will review compressed sensing and sparse coding theory, and add them accordingly in the introduction. $\textbf{Typos}$ Thank you for your feedback. We will fix these typos in the revised version. [1] Convergence rates of posterior distributions. Annals of Statistics, 2000. [2] Convergence rates of posterior distributions for non-i.i.d. observations. The Annals of Statistics, 35(1):192–223, 2007. [3] Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22:94–116, 1994. [4] Consistent sparse deep learning: Theory and computation. JASA, 2021. --- Rebuttal Comment 1.1: Title: Answer to authors' rebuttal Comment: I thank the authors for the detailed answers to my review. I have read the reviews by the other reviewers. Overall, I think this is an interesting paper that is worth accepting at the conference, therefore I will upgrade my rating to 7. I particularly appreciated the comment about the hybridization of algorithmic vs. data modeling, which I think is worth mentioning in the intro. Below are further comments about the answers. **Q1** - Thanks for the explanation. I think adding an explicit mention of this point in the manuscript will help understand the theoretical contribution of the paper. **Q3** - I am a bit lost by your answer, but probably due to a lack of knowledge about $\alpha/\beta$-mixing. From what I understand, the non-emptiness can be ensured at least for $\beta$-mixing by constructing independent blocks and transferring the result from [17].
--- Reply to Comment 1.1.1: Comment: Thank you very much for your encouraging comments and for kindly raising the score. Q1R: As suggested, the comment about the hybridization of algorithm versus data modeling will be incorporated into the introduction of the revised manuscript. Q3R: Yes, your understanding is correct. The non-emptiness can be ensured at least for $\beta$-mixing by constructing independent blocks and transferring the result from [17].
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper extends the sparse DNN theory from [1, 2] to time series data and proposes a sparsification method for RNNs. This method aims to improve uncertainty quantification on time-series tasks and provide state-of-the-art RNN compression. Strengths: * The combination of DNN sparsification and uncertainty quantification for time series data is an interesting direction. * The theoretical groundwork seems complete and convincing. Weaknesses: **Novelty** * **Theory** - The theoretical results for RNNs and time series seem almost identical to [1, 2] aside from minor time-series specifics. I appreciate that the authors explicitly discuss the connection to these works. However, I have concerns about whether the theoretical contribution is significant enough. * **Computation** - the prior annealing algorithm (PA) for model sparsification is identical to [2]. Constructing the prediction intervals for time series might be novel, but it is a minor contribution from my perspective. **Large-scale model compression** The authors state that one of their contributions is a state-of-the-art large-scale RNN compression method. 1. In my opinion, the scope of the model compression area is much larger than weight sparsification. One should either reformulate "compression" to "weight sparsification" or compare with low-rank approximation, quantization, and knowledge distillation methods. 2. The proposed method is compared only with AGP [5] on PTB. The authors state that this area has problems with benchmarks and baselines. I agree that this makes a fair comparison challenging. On the other hand, I think results on a single dataset against a single baseline from 2017 are not sufficient to support such a strong claim. I would consider at least one more dataset and comparing with, e.g., GraNet [6], and some sparsification methods for RNNs, e.g., [7, 12]. Otherwise, please revise the claims and contributions. 3. (Minor) All experiments are entirely deferred to the appendix.
If the authors declared this as one of the contributions, I would expect it in the main paper. **Related work** * **Uncertainty quantification:** please add DropConnect [3] and deep ensembles [4] as uncertainty estimation methods and consider them for comparison in the multi-horizon setting. * **Sparse DNN** should be largely extended by discussing different before-training [8], post-training [9] and during-training [5,6,7,10,11] approaches. Regarding RNN sparsification, I would consider citing and discussing the following works [12, 13, 14]. **Clarity** It was hard to follow some theoretical parts of the paper due to large equations and overloaded notation, e.g., Theorem 3.8. I can see that the authors put effort into simplifying the notation and supporting the theory with discussions in some places. Nevertheless, I think it would be great to do an extra pass to make the reading clearer. (Minor) Figure 1 takes a lot of space but only aims to explain the M_l and R_l notation and the RNN setting. I would suggest making it more informative, or maybe moving it to the appendix. The model compression results arguably seem more important than this figure. --- [1] Consistent sparse deep learning: Theory and computation.
JASA, 2021 [2] Sparse Deep Learning: A New Framework Immune to Local Traps and Miscalibration, NeurIPS 2021 [3] Dropconnect is effective in modeling uncertainty of bayesian deep networks 2019 [4] Simple and scalable predictive uncertainty estimation using deep ensembles NIPS2017 [5] To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017 [6] Sparse training via boosting pruning plasticity with neuroregeneration, NeurIPS2021 [7] Bayesian Compression for Natural Language Processing, EMNLP2018 [8] SNIP: Single-shot network pruning based on connection sensitivity, ICLR2019 [9] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, ICLR2019 [10] Learning Sparse Neural Networks through L0 Regularization, ICLR2018 [11] Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science, Nature 2018 [12] Spectral Pruning for Recurrent Neural Networks, AISTATS2020 [13] Structured Sparsification of Gated Recurrent Neural Networks, AAAI2020 [14] Stage-Wise Magnitude-Based Pruning for Recurrent Neural Networks. TNNLS2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Could the authors address the novelty and model compression evaluation concerns? 2. In my understanding, deep ensembles are s.o.t.a. for uncertainty estimation. Is it a reasonable baseline in the multi-horizon setting? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors addressed the limitations to some extent. One can also discuss more limitations of the proposed algorithm. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your review. We provide a point-by-point response to your comments below. $\textbf{W1: Novelty}$ The seminal work [1] has established a general theoretical framework for studying the asymptotic behavior of posterior distributions and Bayesian estimators for high-dimensional statistical models. The posterior consistency theory established therein requires three conditions on the models and the prior, given in Section B of the appendix (lines 633-636). In words, these conditions require: (a) The set of models considered in the analysis cannot be too large (in terms of packing numbers). (b) The set of models almost encompasses the support of the prior distribution. (c) The prior distribution places a sufficient amount of mass near the true model. However, verifying conditions (a)-(c) for a specific problem is not trivial. For example, [2] addressed the issue for high-dimensional generalized linear models; [3] tackled the issue for deep neural networks with i.i.d. data; and [4] dealt with the issue for nonparametric Bayesian models. In this paper, we address the issue for RNNs with $\alpha$-mixing sequences. Although we employ similar conditions as in [3] to constrain the set of neural networks used in data modeling, the verification of conditions (a)-(c) remains non-trivial. This is detailed in the Appendix of the paper. Notably, some inequalities used for i.i.d. data no longer hold, such as the inequalities on the $d_t$ difference (a generalization of the Kullback-Leibler divergence) used for proving Proposition 1 in [2]. To our knowledge, this paper provides the first theoretical study of sparse RNNs for time series data from the perspective of structure selection, parameter estimation, and prediction uncertainty quantification. Given the wide usefulness of RNNs for time series data [11], we believe that their theory deserves special attention.
$\textbf{W2: Algorithm}$ Yes, the prior annealing algorithm is a direct application of the algorithm in [5]. Our main contribution is extending the theoretical results to RNN models for time series data. Additionally, our experiments demonstrate the advantages of the proposed method over prior works. $\textbf{W3: Model sparsification and baselines}$ Thank you for the suggestion. In the revision, we will change the term `compression` to `sparsification` as suggested. We have conducted an additional experiment comparing our model sparsification approach to the baseline method proposed in [6]. Following their experimental setup exactly, we trained a one-layer RNN with 128 hidden units on PTB data, using the same batch size, number of epochs, and 5 independent runs. As shown in the table below, our approach achieves better test perplexity than [6] under similar (and higher) sparsity levels. All baseline results are directly adopted from their paper. | Methods | Test Perplexity | Sparsity | | ----------- | ----------- | ----------- | | Baseline |114.66 (0.35) | 0% | | $\textbf{PA (ours)}$ | 117.80 (0.10) | 70% | | Spectral w/ rec [6] | 124.26 (0.39) | 67% | We will add both the results and the details of these additional experiments in the revised version. $\textbf{W4: Related work: uncertainty quantification}$ For the multi-horizon forecasting experiments, we followed the setup from [7] and used an LSTM as the base prediction model. Per your suggestion, we will add a discussion of deep ensembles [8] and DropConnect [9] to the related work section. Those methods were originally developed for i.i.d. data like image classification. Therefore, adapting them to RNNs for multi-horizon time series forecasting would require careful tuning and an extension of their original methods with the Bonferroni correction to ensure a fair comparison.
$\textbf{W5: Related work: sparse DNN}$ The lottery ticket hypothesis [10] shows that for many vision tasks, there exist sparse networks that can be trained from scratch to achieve good performance, but the models must have special initializations. This work shed light on research on pruning before training. During-training approaches typically add regularization during training to force parameters to go to 0. Post-training approaches operate on trained neural networks and attempt to remove network parameters based on some pruning criterion, such as parameter magnitude, the Hessian of the loss function, etc. We will add a discussion of these works in the revision. From this perspective, our work is mostly aligned with the during-training approaches. $\textbf{W6: Clarity}$ In the revision, we will continue our efforts to simplify the notation and provide additional discussions of the theory to enhance the readability of the paper. $\textbf{W7: Figure 1}$ In the revision, we will move Figure 1 to the appendix and add more model compression results to the main body of the paper. [1] Convergence rates of posterior distributions. Annals of Statistics, 2000. [2] Bayesian variable selection for high dimensional generalized linear models: convergence rates of the fitted densities. The Annals of Statistics, 2007. [3] Consistent sparse deep learning: Theory and computation. JASA, 2021. [4] Nonparametric Bayesian model selection and averaging. 2008. [5] Sparse deep learning: A new framework immune to local traps and miscalibration. NeurIPS, 2021. [6] Spectral pruning for recurrent neural networks. AISTATS, 2020. [7] Conformal time-series forecasting. NeurIPS, 2021. [8] Simple and scalable predictive uncertainty estimation using deep ensembles. NeurIPS, 2017. [9] DropConnect is effective in modeling uncertainty of Bayesian deep networks. Scientific Reports, 2021. [10] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. ICLR, 2019.
[11] Deep learning for twelve hour precipitation forecasts. Nature communications, 13(1):1–10, 2022. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I would like to thank the authors for their thoughtful clarifications and additional results. I’ve also read other reviews and responses and, as a result, got a bit more optimistic about the submission. **Novelty** Thanks for clarifications about the novelty of the theoretical contribution. As a practitioner, I struggle to judge if verifying similar conditions for the dependent data is sufficient for the main contribution. Thus, I would like to see the opinion of Reviewer riun, who also raised a similar concern. **Model sparsification** Thanks for the comparison with spectral pruning. I believe it makes the comparison slightly better. Having more datasets and methods for comparison would still be great, but I do not think it is a big problem since the main contribution is theoretical. Overall, I do not have strong objections to acceptance if other reviewers confirm the significance of the theoretical contribution against [1, 2]. If accepted, the paper should be largely revised to address reviewers’ suggestions and misunderstandings. I keep my score for now and will consider raising it after the discussions. [1] Sparse deep learning: A new framework immune to local traps and miscalibration. NeurIPS2021 [2] Consistent sparse deep learning: Theory and computation. JASA2021 --- Reply to Comment 1.1.1: Comment: Thank you for your encouraging words. Of course, we will fully take into account the comments and suggestions provided by the reviewers during the revision process. *Question about the novelty.* *Reply:* Other than our clarifications about the novelty of the theoretical contribution, we also want to elaborate our contribution in a broader context of statistical modeling. 
As discussed in [1], two distinct cultures exist for statistical modeling: the 'data modeling culture' and the 'algorithmic modeling culture'. The former focuses on simple generative models that explain the data, potentially lacking a consistent estimate of the true data-generating mechanism due to the model's inherent simplicity. The latter, on the other hand, aims to find models that can predict the data regardless of complexity. Our proposed method occupies a middle ground between these two cultures. It seeks to identify a parsimonious model within the realm of complex models, while also ensuring a consistent estimation of the true data-generating mechanism. From this perspective, our work and [2, 3] represent a new culture as a hybridization of the 'algorithmic modeling culture' and the 'data modeling culture'. This hybridization holds the potential to expedite advancements in modern data science. To illustrate, an increasing number of authors have recently begun exploring ways to sparsify LLMs. In our limited experience, our method has demonstrated efficacy in this context as well. [1] L. Breiman. Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, 16:199–231, 2001. [2] Y. Sun, W. Xiong, and F. Liang (2021) Sparse deep learning: A new framework immune to local traps and miscalibration. NeurIPS, 2021. [3] Y. Sun, Q. Song and F. Liang (2021) Consistent sparse deep learning: Theory and computation. JASA, 2021.
Towards Stable Backdoor Purification through Feature Shift Tuning
Accept (poster)
Summary: Based on the observations that previous FT and FP methods fail in low poisoning rate scenarios, this paper identifies the potential reasons in terms of the degree of separation between clean and backdoor features. It proposes FST to solve the problem by disentangling the features and validates its effectiveness against multiple backdoor attacks. Strengths: 1. The paper is well-written and easy to read. No typo is found. 2. The experiments are sufficient to demonstrate the failure of previous methods in low poisoning rate scenarios. The experiments to verify the potential reason and validate the two initial methods are also convincing. 3. The proposed methods are intuitively reasonable, being derived step by step from the observations in the pre-experiments, to the simple solutions, and finally to the version with the penalty term. 4. The idea is simple yet effective, as shown in Sections 4 and 5, and is easy to follow. Weaknesses: In general, I appreciate this paper, but there are still some concerns from my perspective. 1. The implementation of the proposed methods is not publicly available; only the public library 'BackdoorBench' is provided. 2. The effectiveness of the constraint term C in equation (1) is expected to be discussed in the ablation study. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Do dynamic backdoor attacks, such as WaNet [1] and IAB [2], which generate triggers according to the input, exhibit the same phenomenon as in Figures 1 and 2? Since WaNet is used in Section 5 but not in the pre-experiment part. [1] Nguyen, Anh, and Anh Tran. "Wanet--imperceptible warping-based backdoor attack." *arXiv preprint arXiv:2102.10369* (2021). [2] Nguyen, Tuan Anh, and Anh Tran. "Input-aware dynamic backdoor attack." *Advances in Neural Information Processing Systems* 33 (2020): 3454-3464. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Please see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **First of all, thank you for your recognition of our work.** ### Response to Weakness 1: Thanks for your comment. We will release all of our code, including the corresponding tuning parameters and training checkpoints, in our final version to ensure that the results of all our experiments are reproducible. ---- ### Response to Weakness 2: Thanks for this suggestion. As mentioned in Section 4, to stabilize the tuning process, we add a constraint on the norm of $w$. To reduce the tuning cost, we directly set it to $||w^{ori}||$ instead of manually tuning it, so it is adjusted adaptively based on the trained model. To show the effect of our constraint, we report the performance of FST over the entire tuning process on CIFAR-10 with ResNet-18, with or without the projection term. The results are shown in **Figure 4 of the one-page PDF**. The blue and purple lines represent results with and without projection, respectively. Different line types correspond to different poisoning rates. We can clearly observe that the projection stabilizes the tuning process of FST, leading to significant convergence improvement. FST quickly converges in a few epochs while achieving good robustness and clean accuracy. We will add this study to our revised version. ---- ### Response to Question 1: Thanks for your helpful comment. Following the settings in Section 3.1, we add evaluations of the WaNet and IAB attacks on CIFAR-10 with ResNet-18. The results are shown in **Figure 1 of the one-page PDF**. For the IAB attack, vanilla FT and LP can purify backdoored models at high poisoning rates but fail to defend against low poisoning rate attacks. WaNet cannot achieve 100% ASR in low poisoning rate settings even without defense. Therefore, vanilla FT and LP can also defend against it, like the other defense methods shown in Section 5.2. In response to Reviewer chxU, we also extend the evaluations to more models, attacks, and datasets in **Figure 3 of the one-page PDF**. The results are consistent with the findings in Section 3.1.
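As an aside on the norm constraint discussed above: setting the radius to $||w^{ori}||$ amounts to rescaling the classifier weights back onto a fixed $\ell_2$ sphere after each update. The function below is a hypothetical minimal sketch of such a rescaling, not the actual implementation:

```python
import math

def project_to_norm(w, target_norm):
    """Rescale the weight vector w so that ||w||_2 = target_norm.

    Hypothetical sketch of the constraint ||w|| = ||w_ori||: after each
    gradient step on the tuning objective, w is rescaled back onto the
    l2 sphere of radius ||w_ori||.
    """
    norm = math.sqrt(sum(x * x for x in w)) or 1.0  # guard against ||w|| = 0
    return [x * target_norm / norm for x in w]

# e.g. project_to_norm([3.0, 4.0], 10.0) -> [6.0, 8.0]
```

Because the feasible set is then bounded, the inner-product penalty cannot be driven to minus infinity by inflating $||w||$, which matches the stabilization effect described above.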
---- We hope that the above answers can address your concerns satisfactorily. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I am satisfied and keep my score. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We appreciate your further response and your recognition of our work. We will include our discussion in the revised version. The authors
Summary: This paper studies the effectiveness of finetuning in backdoor defense at low poisoning rates and finds that feature entanglement at low poisoning rates reduces the effectiveness of finetuning. Thus, this paper proposes three new finetuning strategies: FE-tuning, FT-init, and FST. The experiments demonstrate promising results. Strengths: 1. The finetuning study is interesting. 2. FST only uses finetuning to achieve the best backdoor removal results, which is very impressive. 3. This paper is well written and easy to follow. Weaknesses: Although this work is interesting, I still have several concerns. 1. How much data is used in FST? From my point of view, the reinitialization and the larger difference loss need more data. But if the amount of data is too large, the defenders can use these data to retrain a new model. 2. As demonstrated in the paper, only finetuning the feature extractor cannot remove the backdoor successfully. If so, it suggests that there is poison in the linear layers; can you explain this phenomenon? 3. Can you provide more t-SNE results against more attacks, like Figure 3, such as BadNets and WaNet? 4. Although this paper focuses on low poisoning rates, I think results at poisoning rates such as 20% should also be considered, because the defenders cannot know the real poisoning rate, and robustness across different poisoning rates is important. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How much data is used in FST? From my point of view, the reinitialization and the larger difference loss need more data. But if the amount of data is too large, the defenders can use these data to retrain a new model. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **First of all, thank you for your recognition of our work.** ### Response to Weakness 1 & Question 1: **Response to "How much data is used in FST?":** As mentioned in Lines 238-241 of Section 5.1, we follow previous work and reserve only 2% of the training data as the tuning dataset for CIFAR-10 and GTSRB, and 5% for CIFAR-100 and Tiny-ImageNet. The clean tuning sets consist of 1000, 768, 2500, and 5000 samples, respectively. These subsets are relatively small in size. We will highlight the details of our tuning dataset in the revised version. We also evaluate FST under a rigorous scenario with limited clean tuning samples in Section 5.3. We reduce the size of the CIFAR-10 tuning set from 2% to 0.1%, which is around 50 samples. As shown in Figure 5 (c,d), FST consistently performs well across various tuning data sizes, even when the tuning set contains only 50 samples. **Response to "From my point of view, the reinitialization and the larger difference loss needs more data.":** We think the following points help FST achieve a better trade-off between ACC and ASR based on very small tuning datasets: 1. The projection constraint $||w||_2 = C$ shrinks the feasible set and stabilizes the tuning process. As mentioned in Lines 203-219 of Section 4, to prevent $w$ from exploding and the inner product from dominating the loss, we add a constraint on $||w||$ and set it to $||w^{ori}||$ rather than treating it as a manual hyperparameter. This restricts the feasible set to the $\ell_2$ ball of radius $||w^{ori}||$, which stabilizes the tuning process of FST and leads to significant convergence improvement. As shown in the efficiency analysis of Section 5.3, FST quickly converges in a few epochs while achieving good defense performance and clean accuracy. 2. When tuning on small tuning sets, the original CE loss term in our objective (Eq.(1)) helps the tuned model maintain clean accuracy. Compared with FE-tuning, which lacks the CE loss term, FST achieves better clean accuracy and defense performance.
3. We only reinitialize the linear head and apply the difference loss to it. This involves only a small portion of the parameters of the entire model, and the feature extractor is already initialized with good accuracy. ---- ### Response to Weakness 2: **Response to "As demonstrated in the paper, only ...... cannot remove backdoor successfully.":** Actually, we do not use tuning methods that only finetune feature extractors. We speculate that the reviewer is referring to FE-tuning. As described in Line 169 of Section 3.2 and shown in Figure 1, FE-tuning first randomly re-initializes the linear classifier and then tunes only the feature extractor. We apologize for any confusion caused. As suggested by Reviewer 2qbe, we will add detailed explanations about FE-tuning at Line 52 in our revised version. **Response to "If so, it demonstrates that there is poison in the linear layers, can you explain this phenomenon?":** 1. FE-tuning randomly re-initializes the linear classifier, so there is no poison in the new linear classifier. However, it still fails to completely eliminate backdoor triggers under various settings. The reason is that for low poisoning rate attacks, FE-tuning does not effectively shift the learned features. The random initialization of the linear classifier may not be sufficient to induce enough shift in the learned features. This is also why we propose FST, which adds an extra penalty on the linear classifier ($\alpha \langle w, w^{ori} \rangle$). It encourages discrepancy between the tuned and the original backdoored classifier and further promotes feature shifts, as mentioned in Section 4. The stable defense performance under various settings also verifies the effectiveness of FST. 2. As shown in Section 3.1, under a high poisoning rate, LP achieves good defense performance. This demonstrates that purifying backdoors in the linear head alone suffices to defend against attacks when backdoor features are clearly separable from clean features under high poisoning rate settings.
However, as mentioned in Section 3.2, more feature shift is needed to defend against low poisoning rate attacks. ---- ### Response to Weakness 3: Thanks for your suggestion. Following the settings in Section 3.2, we provide TSNE results for the BadNets and WaNet attacks. The results are shown in **Figure 2 of the one-page PDF**. Under a high poisoning rate (10%), the backdoor features (black points) are clearly separable from the clean features (red points), while the targeted clean and backdoor features are closer to each other under a low poisoning rate (1%). This is consistent with the results in Section 3.2. As shown in the top row, consistent with the observations for the Blended attack in Section 3.2, vanilla FT still fails to provide sufficient feature modification, leading to a failed defense against BadNets attacks. WaNet fails to achieve 100% ASR in low poisoning rate settings without defense. Hence, its backdoor features are not very close to the clean features, unlike the Blended and BadNets attacks. Therefore, vanilla FT can also defend against it, like the other defense methods shown in Section 5.2. In response to Reviewer Zkrc, we also conduct visualizations of adaptive attacks. As shown in **Figure 1 of the one-page PDF**, FST still significantly shifts backdoor features and makes them separable from clean features. ---- ### Response to Weakness 4: Thanks for this constructive suggestion. To fully assess FST’s effectiveness, we conduct additional evaluations of FST by increasing the poisoning rate to 30% on CIFAR-10 and GTSRB with ResNet-18. The results are shown in **Table 2 of the one-page PDF**. We observe that FST still achieves stable and outstanding defense performance under high poisoning rates, reducing ASR below 2%. This further verifies the effectiveness of our method. We will add the suggested evaluations in our revised version. ---- We hope that the above answers address your concerns satisfactorily. We would be grateful if you could re-evaluate our work based on the above responses.
We look forward to receiving your further feedback. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal; most of my concerns are resolved. I will keep my score and lean toward accepting this paper. --- Reply to Comment 1.1.1: Title: Thanks for your recognition of our work Comment: We sincerely appreciate your constructive feedback throughout the review process. We are committed to incorporating your suggestions as we revise the paper. We are delighted that our responses have addressed your concerns. Thanks for your recognition of our work! Best regards, The Authors --- Rebuttal 2: Title: Seeking Your Valuable Feedback Comment: Dear Reviewer X5wr, Thanks for spending time reviewing our work and providing valuable feedback! We have provided responses to your questions. We sincerely appreciate it if you could provide further feedback and comments on our response. Ensuring that the rebuttal aligns with your suggestions is of utmost importance. Your response is very helpful in further improving the quality of our work. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Thanks to the authors for your response. @Reviewer X5wr: Does the rebuttal fully address your concerns? Best regards, Your AC
Summary: This paper observes that while fine-tuning and linear probing can act as effective defenses in the high-poisoning-rate regime, they completely fail in the low-poisoning-rate regime. The paper further shows that this is due to the fact that in the low-poisoning-rate regime, extracted features of backdoored and clean samples are highly entangled. The proposed defense is to fine-tune the backdoor model with an additional constraint to force the fine-tuned weights to differ from the original (poisoned) weights. Extensive experiments show that the proposed defense provides excellent backdoor purification while maintaining clean accuracy. Strengths: - This paper shows an interesting observation that fine-tuning and linear probing perform significantly worse as backdoor defenses in the low poisoning rate regime. - This observation is followed by a thorough and nicely written analysis in Section 3. - The proposed defense is simple, elegant, yet effective. - Very extensive experiments. Weaknesses: Please pay more attention to weaknesses marked with "major" severity. - **[minor, clarity]**: L52, please also mention that FE-tuning randomly re-initializes the classifier head. I was confused when looking at Figure 1 at first because it was not mentioned previously that FE-tuning does re-initialize $f(w)$. - **[minor, typo]**: L102, "Here We" -> "Here, we" - **[major, lack of experiments on high poisoning rates]**: The proposed defense should be tested with high poisoning rates just to ensure that it performs equally well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It would be nice if the authors could extend Figure 2 to other datasets. Specifically, I am interested in seeing the results on Tiny-ImageNet and CIFAR-100. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - One potential limitation of this work is the assumption of having access to clean data for training, which may not always be practical in real-world scenarios. However, the authors have acknowledged this limitation and expressed their commitment to addressing it in future research. - It would have been more insightful if the authors had delved deeper into why low poisoning rates cause entanglement between backdoored and clean features. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **First of all, thank you for your recognition of our work.** ### Response to Weakness minor 1 and 2: Thank you for your helpful suggestion! We apologize for the confusion caused. We will add explanations about FE-tuning in Line 52 in our revised version. We will correct all the typos and carefully polish the paper in the revision. ---- ### Response to Weakness major 1: Thanks for your valuable suggestion! To fully assess FST’s effectiveness, we conduct additional evaluations of FST by increasing the poisoning rate to 30% on CIFAR-10 and GTSRB with ResNet-18. The results are shown in **Table 2 of the one-page PDF**. We observe that FST still achieves stable and outstanding defense performance under the high poisoning rate setting, reducing ASR below 2%. This further verifies the effectiveness of our method. We will add the suggested evaluations in our revised version. ---- ### Response to Questions: Thanks for your valuable suggestion. We conduct evaluations of vanilla FT and LP on CIFAR-100 and Tiny-ImageNet with a pre-trained SwinTransformer. As mentioned in Section 3.1, we mainly focus on defense performance at a satisfactory clean accuracy level (80%). We tune hyperparameters based on this condition. The results are shown in **Figure 3 of the one-page PDF**. We also provide results for ResNet-50 and Dense-161 on CIFAR-10 and GTSRB. We observe that vanilla FT and LP can purify backdoored models at high poisoning rates but fail to defend against low poisoning rate attacks. The only exception is the SSBA results, since the original backdoored models have a relatively low ASR, as mentioned in Section 3.1. These results align with our findings in Section 3.1. We will add the suggested evaluations in our revised version. ---- ### Response to Limitation 1: Thanks for your comment. 1. We utilize a very small clean subset to conduct tuning methods in this work.
As mentioned in Section 5.1, we follow previous work and reserve only 2% or 5% of the original training data as the tuning dataset for CIFAR-10 and GTSRB or CIFAR-100 and Tiny-ImageNet, respectively. Additionally, in Section 5.3, we also evaluate FST under a rigorous scenario with far fewer clean tuning samples. We reduce the size of our CIFAR-10 tuning set from 2% to 0.1%, which is around 50 samples. As shown in Figure 5 (c,d), FST consistently performs well across various tuning data sizes, even when the tuning set contains only 50 samples. 2. In real-world scenarios, we may be able to utilize existing dataset filtering methods [1] to help us construct this small tuning dataset. In this work, our main objective is to develop more robust tuning defense methods. We will investigate constructing a tuning dataset in future work. ---- ### Response to Limitation 2: Thanks for your helpful suggestion! In the main submission, we mainly focus on further improving the defense performance of tuning methods against low poisoning rate attacks, after first observing that existing tuning methods fail to provide stable robustness. Here we present some initial intuitions based on related work [2,3] and will try to investigate the underlying reasons and provide a detailed analysis in future work. Our intuition is that, to learn a stable mapping between backdoor patterns in the input and the target class, backdoor samples lead the model to differentiate between backdoor features and the clean features of the target class. The former is thus not influenced by the latter and dominates the model's decision process, leading to the wrong classification. High poisoning rate attacks therefore produce an obvious and easy separation in feature space. While keeping the ASR, attacks with low poisoning rates produce stealthier backdoor features that are closer to the clean features. [1].
Meta-Sift: How to Sift Out a Clean Data Subset in the Presence of Data Poisoning, Usenix 2023. [2]. Revisiting the Assumption of Latent Separability for Backdoor Defenses, ICLR 2023. [3]. Spectral signatures in backdoor attacks, NeurIPS 2018. ---- We hope that the above answers can address your concerns satisfactorily. --- Rebuttal Comment 1.1: Title: All concerns addressed Comment: Thank you for your response. Please consider adding your intuition for Limitation 2 in the revision. I am raising my rating to 7. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We appreciate your further comment and your recognition of our responses. Thank you for this suggestion. We will include this discussion in Section 3 of the revised version. The authors
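The separability intuition discussed in this thread can be illustrated with a purely synthetic toy example (the cluster positions are invented for illustration and are not the paper's data): when an attack pushes backdoor features far from the target class's clean features, as in the high poisoning rate regime, a simple centroid-gap measure is large; when the features are entangled, as in the low poisoning rate regime, it shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)

def centroid_gap(offset, n=200, dim=2):
    # Synthetic "clean target-class" features around the origin and
    # "backdoor" features shifted by `offset` (a stand-in for how far
    # the attack pushes backdoor features in latent space).
    clean = rng.normal(0.0, 1.0, size=(n, dim))
    backdoor = rng.normal(0.0, 1.0, size=(n, dim)) + offset
    return np.linalg.norm(clean.mean(axis=0) - backdoor.mean(axis=0))

gap_high_rate = centroid_gap(offset=6.0)  # well-separated clusters
gap_low_rate = centroid_gap(offset=0.5)   # entangled clusters

print(gap_high_rate > gap_low_rate)
```

This is only a caricature of the TSNE observations in Section 3.2; the point is that defenses relying on latent separation have much less signal to exploit in the entangled (low offset) case.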
Summary: This paper finds that fine-tuning is less effective in defending against backdoor attacks with a low poisoning rate, due to the strong coupling between the clean feature and the backdoor feature. Therefore, a tuning-based backdoor purification method called feature shift tuning (FST) is proposed, which is simple and efficient. FST can effectively decouple the clean feature and the backdoor feature, and thus eliminate the backdoor in the victim model. Strengths: 1. This paper proposes a simple and effective backdoor elimination method, which achieves good results in a specific scenario where the poisoning rate of the backdoor attack is relatively low. Weaknesses: 1. This paper finds that fine-tuning has difficulty defending against backdoor attacks with a low poisoning rate when ResNet-18 is the victim model, but does not sufficiently demonstrate the generalizability of the problem. The same claim is not guaranteed to hold when the model capacity, model architecture, and dataset type change. 2. This paper lacks an explanation of why the method works and does not explore the effect of the range of poisoning rates on the victim model. 3. It is desirable to include backdoor elimination methods other than tuning-based methods as baseline methods as well to demonstrate the efficiency of FST. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: It is suggested that the authors provide further explanation as to why the method worked. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. 
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **First of all, thank you for your recognition of our work.** ### Response to Weakness 1: Thanks for your constructive comments. We add evaluations of vanilla FT and LP using ResNet-50 (increased model capacity) and Dense-161 (a different model architecture) on CIFAR-10. We also evaluate on GTSRB (a different dataset type). As mentioned in Section 3.1, we mainly focus on defense performance at a satisfactory clean accuracy level (92% on CIFAR-10, 97% on GTSRB). We tune hyperparameters based on this condition. The results are shown in **Figure 3 of the one-page PDF**. We observe that vanilla FT and LP can purify backdoored models at high poisoning rates but fail to defend against low poisoning rate attacks. The only exception is the SSBA results, since the original backdoored models have a relatively low ASR, as mentioned in Section 3.1. These results align with our findings in Section 3.1. We will add these suggested evaluations in our revised version. ---- ### Response to Weakness 2 and Limitations: **Response to “This paper lacks an explanation of why the method works”:** 1. In Lines 220-231 on page 6, we explain why FST provides better robustness and brings unified improvements over the initial methods, FE-tuning and FT-init. Additionally, in the final panel of Figure 3, we include T-SNE visualizations of the feature extractor after applying FST to confirm its effectiveness in purifying backdoor features. 2. As shown in Section 3.2, we first find that in low poisoning rate scenarios, entanglement between the backdoor and clean features makes FT and LP fail to defend against attacks. We propose two initial methods, FE-tuning and FT-init, to promote backdoor feature shifts and make them easily separable from the clean features of the target class. Hence, the feature extractor will no longer confuse backdoor features with clean features of the target class in the feature space.
As a result, the subsequent linear classifier is hard to mislead with backdoor samples, resulting in more robust classification. The evaluation and visualization in Section 3.2 verify their effectiveness for backdoor purification. 3. However, these two methods still suffer from a clean accuracy drop or unsatisfactory defense performance. To achieve a unified improvement, we further propose FST, which actively shifts features by encouraging the difference between the tuned and original classifier. Compared with FE-tuning, FST adds an extra loss term, yielding the objective $\min_{w} E_{(x,y)\sim D_{T}} L(f(w; \phi(\theta;x)), y)+ \alpha <w,w^{ori}>$ for updating the linear classifier $w$. It further promotes feature shift by penalizing the classifier more intensely and also maintains the model's ACC through the original CE loss term. Compared with FT-init, by adopting $\alpha <w,w^{ori}>$, FST encourages discrepancy between the tuned $w$ and the original $w^{ori}$ to induce more shift in the learned backdoor features. 4. Feature visualization in Figure 3 shows that our FST significantly shifts backdoor features and makes them easily separable from the clean features of the target class. Comparisons with other defense methods in Section 5 verify the superiority of FST in defending against attacks. **Response to “does not explore the effect of the range of poisoning rates on the victim model.”:** Thanks for this constructive comment. In the main submission, we mainly focus on attacks with low poisoning rates (0.5%, 1%, 5%) and evaluate FST on them, since vanilla FT and LP are ineffective in defending against them in our revisiting experiments. To fully assess FST’s effectiveness, we conduct additional evaluations of FST by increasing the poisoning rate to 30% on CIFAR-10 and GTSRB with ResNet-18. The results are shown in **Table 2 of the one-page PDF**. We observe that FST still achieves stable and outstanding defense performance under the high poisoning rate setting, reducing ASR below 2%.
This further verifies the effectiveness of our method. We will add the suggested evaluations in our revised version. ---- ### Response to Weakness 3: Thanks for this valuable suggestion. To further assess FST’s effectiveness, we add another training defense method, ABL [1], which also performs well in BackdoorBench. Following settings of Section 5.1, we evaluate it in CIFAR-10 with ResNet-18. The results are shown in the below table. We could observe that FST achieves better and more stable performance than ABL under various settings with maintaining good clean accuracy. [1] Anti-backdoor learning: Training clean models on poisoned data, NeurIPS 2021. **CIFAR-10 and ResNet-18** |Attack|Poisoning rate|ABL(ACC/ASR)|FST(ACC/ASR)| |:-------:|:-------:|:-------:|:-------:| |BadNets|5%|89.65/0.08|93.17/0.00| ||1%|72.34/7.13|92.81/0.01| ||0.50%|71.59/9.97|93.63/0.02| |Blended|5%|86.36/0.74|92.87/3.07| ||1%|71.48/18.77|93.59/0.19| ||0.50%|71.82/49.66|93.15/0.06| |WaNet|5%|69.78/77.39|91.56/0.26| ||1%|72.08/31.17|91.83/0.51| ||0.50%|74.31/13.23|91.70/0.78| |SSBA|5%|73.44/24.61|93.48/0.27| ||1%|73.72/72.57|93.32/0.56| ||0.50%|73.33/26.29|92.97/0.04| |SIG|5%|89.31/2.91|93.24/0.02| ||1%|89.73/5.37|93.09/0.03| ||0.50%|74.07/50.60|93.18/0.01| |LC|5%|90.19/4.29|93.47/0.68| ||1%|67.41/8.83|93.51/0.30| ||0.50%|75.42/99.84|93.44/1.71| ---- We hope that the above answers can address your concerns satisfactorily. We would be grateful if you could re-evaluate our work based on the above responses. We look forward to receiving your further feedback. --- Rebuttal 2: Title: Seeking Your Valuable Feedback Comment: Dear Reviewer chxU, Thanks for spending time reviewing our work and providing valuable feedback! We have provided the response to your questions. We sincerely appreciate it if you could provide further feedback and comments on our response. Ensuring that the rebuttal aligns with your suggestions is of utmost importance. 
Your response is very helpful in further improving the quality of our work. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Thanks to the authors for your response. @Reviewer chxU: Does the rebuttal fully address your concerns? Best regards, Your AC --- Rebuttal Comment 2.2: Title: Thank you Comment: Dear Reviewer chxU, We want to follow up to make sure that we can discuss any further questions/comments/concerns. Please let us know if we could do anything to resolve any questions or concerns between now and the end of the discussion period. The discussion phase is coming to an end. The authors
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for your recognition of our work! We sincerely appreciate all of your precious time and constructive comments. All these comments and suggestions are very insightful and beneficial for us to improve the quality of this work. We have responded to each review separately and hope our responses are helpful in addressing the reviewers' questions. We will carefully revise our manuscript by adding suggested evaluations, providing more detailed explanations, and fixing the typos. **We have attached a separate PDF file containing additional experimental results aimed at addressing the concerns of reviewers.** Best regards, The Authors Pdf: /pdf/7b609df1aef3ffc4afbd4aa9e716748075485d24.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a novel backdoor defense approach called Feature Shift Tuning (FST) that actively promotes feature shifts to disentangle the learned features of a deep neural network. The authors conduct a thorough assessment of widely used tuning methods, vanilla Fine-tuning (FT) and simple Linear Probing (LP), and demonstrate that they both completely fail to defend against backdoor attacks with low poisoning rates. FST is an end-to-end tuning method that actively shifts features during fine-tuning by encouraging the difference between the tuned classifier weight w and the original backdoored classifier weight w_ori. FST significantly shifts backdoor features and makes them easily separable from clean features of the target class, which effectively mitigates backdoor attacks. In summary, the proposed FST method is a strong backdoor defense paradigm that can achieve stable improvements in both robustness and clean accuracy compared to other initial methods such as FE-tuning and FT-init. Strengths: - The proposed FST (Feature Shift Tuning) method is a strong backdoor defense paradigm that can achieve stable improvements in both robustness and clean accuracy compared to other initial methods such as FE-tuning and FT-init. FST is an end-to-end tuning method that actively shifts features during fine-tuning by encouraging the difference between the tuned classifier weight w and the original backdoored classifier weight w_ori. This provides a good reference for research in this area. - FST disentangles the clean and backdoor features, which helps to purify the model. It significantly shifts backdoor features and makes them easily separable from clean features of the target class, which effectively mitigates backdoor attacks. FST outperforms other methods by a large margin for backdoor robustness on the CIFAR-10 and GTSRB datasets.
- FST is effective against adaptive attacks such as the Adaptive-Blend attack, achieving average drops in ASR of 81.66% and 91.75% on the CIFAR-10 and GTSRB datasets, respectively. FST is a unified method that addresses both the clean accuracy drop and the unsatisfactory robustness improvements present in other backdoor defense methods. The ablation study is organized well to clearly demonstrate the whole proposed method, and it makes the paper easy to follow. Weaknesses: - [Contribution and novelty] The contribution of the paper is somewhat incremental, as the method is developed based on previously researched findings. The backdoor attack can be erased by decoupling the feature extractor and the linear classifier in [1]. - [Adaptive Attack] The submission only briefly mentions an attempt at achieving the disentanglement between the clean and backdoor features through an adaptive attack, but fails to do so effectively, and provides only limited insight into why the adaptive attack failed. Please take a closer look at providing a strong evaluation of the adaptive attack scenario. Besides, the paper [2] also proposes another adaptive attack, Adaptive-Patch, which should also be discussed when attacking the proposed FST. [1] Huang, Kunzhe, et al. "Backdoor Defense via Decoupling the Training Process." International Conference on Learning Representations. 2021. [2] Xiangyu Qi, Tinghao Xie, Yiming Li, Saeed Mahloujifar, and Prateek Mittal. Revisiting the assumption of latent separability for backdoor defenses. International Conference on Learning Representations. 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Listed in the weaknesses of the paper. The score can be improved if the concerns listed above are resolved. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors addressed the limitation that they assume the defender would hold a clean tuning set which may not be feasible in certain scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **First of all, thank you for your recognition of our work.** ### Response to Weakness 1: **Response to “The contribution of paper ...... previously researched findings.”:** 1. The authors of [1] propose that end-to-end supervised training makes the model learn backdoor features. Hence, they propose a two-phase training defense method called DBD. DBD utilizes self-supervised learning (SSL) to train the feature extractor and then trains a linear classifier with the frozen feature extractor. 2. In contrast to DBD, our work focuses on how to efficiently purify existing backdoor triggers during the tuning process rather than the training process. We first investigate whether common FT methods in the pretrain-tuning paradigm, such as vanilla FT and LP, can consistently provide robustness against attacks across various settings. **We find that**: 1) under high poisoning rate settings, where the backdoor and clean features of the targeted class are well separable, simple LP provides satisfactory defense performance; 2) under low poisoning rate settings, backdoor and clean features are tangled together (Figure 2 of Section 3.1). As a result, FT and LP fail to purify inserted backdoors due to insufficient feature shifts. While DBD shows that the feature extractor can learn backdoor features, it does not discuss how the learned features vary at different poisoning rates, particularly in terms of the separability between clean and backdoor features. This is crucial for designing robust and stable defense methods for various attack settings. 3. Motivated by our findings, we propose two initial methods (FE-tuning and FT-init) to encourage separability between backdoor and clean features. However, they still suffer from a clean accuracy drop or unsatisfactory robustness improvements. To achieve a unified improvement, we further propose FST based on them, a simple end-to-end tuning method.
It actively encourages discrepancy between the tuned and original classifiers to induce more shift in the backdoor features. **Response to “The backdoor attack can ...... and the linear classifier in [1].”:** 1. As mentioned above, FST is an end-to-end tuning method which does not decouple the feature extractor and linear classifier as in [1]. Our initial method, FE-tuning, adopts a decoupled form in the tuning process. However, such methods still suffer from a clean accuracy drop or unsatisfactory robustness improvements (Table 1 of Section 3.2). As shown in Section 5.2, FST achieves better and more stable defense performance under various settings. Compared with the high training cost of DBD, FST is a much simpler and more flexible method that can be easily integrated with existing training methods and pretrained models. ---- ### Response to Weakness 2: Thanks for your suggestion. We first give reasons why we believe the attacks in [2] are powerful adaptive attacks against our method. Then, we follow your suggestion and conduct a more detailed adaptive attack evaluation based on [2]. We finally provide explanations for why the adaptive attacks failed. 1. As shown in Section 3.1, under high poisoning rates, backdoor features are clearly separable from the clean features of the targeted class and thus can be easily purified by vanilla FT and LP. To bypass defense methods based on the latent separation property, adaptive poisoning attacks [2] actively suppress the latent separation between backdoor and clean features, so that they can achieve a high ASR by adopting an extremely low poisoning rate and adding regularization samples. This also corresponds to our finding in Section 3.2 that entanglement between clean and backdoor features in low poisoning rate settings makes FT and LP fail. The evaluations in [2] also show that these attacks successfully bypass existing strong latent-separation-based defenses. Hence, we believe they are equally powerful adaptive attacks against our FST method. 2.
Following the reviewer’s suggestion, we add evaluations of the Adaptive-Patch attack on CIFAR-10 with ResNet-18. To further reduce latent separability and improve adaptiveness against latent-separation-based defenses, we also use more regularization samples, following the ablation study in Section 6.3 of [2]. We show the results in **Table 1 of the one-page PDF**. We observe that even against these stealthier adaptive attacks, FST still achieves outstanding defense performance. 3. Our work and [2] mainly focus on practical data-poisoning backdoors. To further assess the stability of FST, we also test FST against training-control adaptive attacks [3]. The authors of [3] utilize adversarial network regularization during the training process to minimize the differences between backdoor and clean features in the latent representations. Since the authors do not provide source code, we implement it based on the descriptions in the original paper. The results are shown in **Table 1 of the one-page PDF**. FST still significantly reduces the ASR. This further proves the excellent stability of our method. 4. To explain why the adaptive attacks fail, we provide TSNE visualizations of the learned features from the backdoored models, as in Section 3.2. We show the results in **Figure 1 of the one-page PDF**. We first observe that the adaptive attacks significantly reduce latent separability: clean and backdoor features are tightly tangled. FST effectively shifts backdoor features and makes them easily separable from the clean features of the target class. Therefore, the feature extractor will no longer confuse backdoor features with clean features of the target class in the feature space. As a result, the subsequent simple linear classifier is hard to mislead with backdoor samples, resulting in more robust classification. We will include the adaptive attack evaluations in the revision. [3]. Bypassing backdoor detection algorithms in deep learning, Euro S&P.
---- We hope that our answers can address your concerns satisfactorily and improve the clarity of our contribution. We would be grateful if you could re-evaluate our paper. We look forward to receiving your further feedback. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response. This work presents convincing results but does not surprise me due to the novelty. Given all, I maintain my score. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thanks for your recognition of our work and feedback on our response. We will include your valuable suggestions and our discussion in the revised version. We will further clarify our contributions and the technical novelty in the revised version. Thanks The authors
Investigating how ReLU-networks encode symmetries
Accept (poster)
Summary: The paper considers networks with ReLU activations and investigates in which specific way their internal activations learn to be equivariant given an invariant data distribution. Section 2 presents a theoretical result, applying specifically to two-layer networks with non-singular weight matrices. It is shown that the network's equivariance requires 1) that its input and output activations are acted on by scaled permutation representations, and 2) that the network is layerwise equivariant with a related scaled permutation representation acting on the hidden features. Example 2.1 clarifies that this result does not necessarily hold in the case of singular weights. The third section proposes the conjecture that ReLU-CNNs that are trained on a reflection-invariant data distribution are close to regular GCNNs, i.e. GCNNs whose group representations are permutation representations. This conjecture is based on Entezari et al.'s conjecture that the activations of neural networks can always be permuted such that the networks are closely connected in the loss landscape. The authors argue that there always exists a CNN with exactly flipped kernels at initialization, a relation which, due to the assumed invariant distribution, is preserved throughout training. Since these nets can, by Entezari's conjecture, be aligned via channel permutations, it follows that the CNN has either invariant kernels or kernels which are related by reflections - they are hence layerwise equivariant regular GCNNs. The experimental section investigates whether these two results hold in practice, using an activation matching technique to find reflected kernels. The results suggest that layerwise equivariance does indeed hold. Strengths: The paper is well written and gives insight into the nature of networks which are trained to be equivariant. This is an original idea since most publications on equivariant networks rather investigate models which are by design constrained to be equivariant.
The actual necessity of regular group convolutions is clarified by the submission. Where possible, the network's layerwise equivariance is undoubtedly proven with an analytical approach. While this approach does not generalize to networks with more than two layers or singular weight matrices, the authors use a different approach to tackle these cases. All results and conjectures are supported empirically. The extent of the paper is comprehensive, containing multiple additions in the supplementary material. Weaknesses: The strong assumptions in the analytical result seem initially like a weakness, but are addressed by the alternative approach based on Entezari's conjecture. The argument that this conjecture implies layerwise equivariance seems reasonable, but could be explained better. I am also not sure whether the conjecture applies in the first place; however, this is beyond the scope of the current paper. Technically, I do not see the necessity of defining the barrier in Eq. (4) exactly at the parameters' average instead of at the maximizing parameter value. Another weakness is that the authors are only considering reflection-equivariant GCNNs, but not more general symmetry groups. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why is section 2 considering linear instead of affine layers? Do the results not hold when biases are summed? Lemma 2.2 claims that the diagonal scaling matrix would be required to have (strictly) positive entries, however, non-negative entries, including zeros, seem sufficient. The resulting set of matrices does then no longer form a group since invertibility is lost, but the original group is contained as subset. The "intertwiner group" should furthermore be renamed to "equivariance group" since intertwiners are by definition linear, which ReLU is not. The activation matching technique should be briefly explained in the current submission instead of only pointing to related work. 
The paper is currently only mentioning trivial and regular representations, however, everything should apply to more general quotient representations as well, which should be briefly clarified. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are clearly addressed in Section 1.2, and societal impacts are not to be expected. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and helpful review. > Why is section 2 considering linear instead of affine layers? Do the results not hold when biases are summed? This is an excellent question. We wrote it this way (1) for simplicity, (2) because Elesedy & Zaidi [9] use the same setting and (3) for the fact that Prop 2.3 deals with bias free layers. The discussion up to Section 2.1 works with biases as well, as does Lemma 2.2 (as it regards ReLU and not the layers themselves). The paragraph after Lemma 2.2 also works fine. For Proposition 2.3 it gets more complicated. What makes the proof of 2.3 easy is that given an invertible linear layer $\ell:X\to Y$, $\ell(x)= Wx$, a group representation $\rho(g)$ on $X$ is transferred to a group representation $\alpha(g, y) =\ell(\rho(g)\ell^{-1}(y))=W\rho(g)W^{-1}y$ on $Y$ w.r.t. which $\ell$ is equivariant. If we consider an affine layer $\ell(x) = Wx + t$ with bias vector $t$, then we can play the same game, but $\rho(g)$ is not transferred to a group representation (linear action) but instead to the affine action $\alpha(g, y) = \ell(\rho(g)\ell^{-1}(y))= W\rho(g)W^{-1}y + (I - W\rho(g)W^{-1})t$ on $Y$. This means that we cannot apply Lemma 2.2. We can however find a generalization as follows: **Lemma** If $A$, $B$ are $n\times n$ invertible matrices and $a$ and $b$ are $n$-vectors, such that $\mathtt{ReLU}(Ax + a) = B\mathtt{ReLU}(x) + b$ for all $x\in\mathbb{R}^n$, then $a=b=0$. *Proof:* Inserting $x=0$ yields $\mathtt{ReLU}(a)=b$. Inserting $x=A^{-1}a$ yields $\mathtt{ReLU}(2a)=B\mathtt{ReLU}(A^{-1}a)+b$; since $\mathtt{ReLU}(2a)=2\mathtt{ReLU}(a)=2b$, this gives $b=B\mathtt{ReLU}(A^{-1}a)$. Inserting $x=-A^{-1}a$ yields $0=B\mathtt{ReLU}(-A^{-1} a) + b$ so $b=-B\mathtt{ReLU}(-A^{-1}a)$. Combined we have that $B^{-1}b=\mathtt{ReLU}(A^{-1}a)=-\mathtt{ReLU}(-A^{-1}a)$; since ReLU is entrywise non-negative, $B^{-1}b$ must be zero and hence so must $b$. 
Finally, inserting $x=-2A^{-1}a$ yields $\mathtt{ReLU}(-a)=B\mathtt{ReLU}(-2A^{-1}a) + b$; by positive homogeneity $B\mathtt{ReLU}(-2A^{-1}a)=2B\mathtt{ReLU}(-A^{-1}a)=-2b$, so that $\mathtt{ReLU}(-a)=-b=0$, which combined with $\mathtt{ReLU}(a)=b=0$ gives $a=0$. This lemma shows that if ReLU commutes with affine actions, then the affine actions must in fact be linear and so Lemma 2.2 applies. This shows that Proposition 2.3 holds with affine layers as well. We would like to greatly thank the reviewer for prompting this generalization which we will include in the appendix. > Lemma 2.2 claims that the diagonal scaling matrix would be required to have (strictly) positive entries, however, non-negative entries, including zeros, seem sufficient. The resulting set of matrices does then no longer form a group since invertibility is lost, but the original group is contained as subset. It is correct that non-negative diagonal matrices commute with ReLU. The reason for excluding them in the lemma is that otherwise we would have to exclude them later when talking about group representations, for the reason the reviewer states - non-invertibility. > The "intertwiner group" should furthermore be renamed to "equivariance group" since intertwiners are by definition linear, which ReLU is not. This terminology comes from Godfrey et al. 2022. We agree with the reviewer that the naming is a bit unfortunate. We are unsure if the best course of action would be to add a footnote or to simply remove the sentence, and are happy to receive further feedback. > The activation matching technique should be briefly explained in the current submission instead of only pointing to related work. The paper is currently only mentioning trivial and regular representations, however, everything should apply to more general quotient representations as well, which should be briefly clarified. We thank the reviewer for these helpful suggestions. 
We would like to clarify that trivial and regular representations are the only ones discussed for the horizontal flipping case as any permutation representation of the 2 element cyclic group must consist of fixed elements and transpositions only. I.e. the permutation representations are direct sums of trivial and regular representations. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. The rebuttal addresses all of my questions and I would like to express again that I vote for accepting the paper. Regarding the "intertwiner group" terminology I think that both of the proposed solutions are fine. I would personally choose "equivariance group" as it should be understood by everyone working on equivariant networks and does not require the additional footnote. Note that not every reader in equivariant DL may be familiar with representation theory, intertwiners or the terminology of Godfrey et al.
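As a side note, the two ReLU commutation facts discussed in this thread, the scaled permutation representations of Lemma 2.2 and the affine-bias lemma from the rebuttal above, lend themselves to a quick numerical sanity check. The sketch below is illustrative only and not part of the original exchange; `A` is a scaled permutation built from a random permutation and a strictly positive diagonal:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
n = 5

# Scaled permutation matrix A = D P with strictly positive diagonal D,
# the form Lemma 2.2 requires for ReLU(Ax) = A ReLU(x) to hold.
P = np.eye(n)[rng.permutation(n)]
D = np.diag(rng.uniform(0.5, 2.0, n))
A = D @ P

x = rng.standard_normal((n, 1000))
assert np.allclose(relu(A @ x), A @ relu(x))

# The affine identity ReLU(Ax + a) = B ReLU(x) + b forces a = b = 0 by the
# lemma in the rebuttal; with a nonzero bias it fails. Spot check with B = A
# and b = ReLU(a), the value forced by inserting x = 0.
a = rng.standard_normal((n, 1))
assert not np.allclose(relu(A @ x + a), A @ relu(x) + relu(a))
```

Zero entries on the diagonal would also satisfy the first identity, which is exactly the non-invertible case the review and rebuttal discuss excluding.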
Summary: This work investigates the relationship between end-to-end equivariance of a network and layerwise equivariance. It theoretically investigates when we can guarantee that an equivariant network is layerwise equivariant, and also cases where layerwise equivariance is not guaranteed or is harmful. In the case of CNNs, the authors draw a connection between horizontal flip invariance and the permutation conjecture for linear mode connectivity. Strengths: 1. Good exposition in Sections 1 and 2 2. Appendices C and D have nice observations and are illustrative. I quite like the examples in C and C.1., which concretely show ways in which layerwise equivariance is not good enough when network capacity is low. 3. Neat simpler proof of the lemma from Godfrey et al. 2022 4. Nice connection drawn to the permutation conjecture that I would not have expected. 5. The trained VGGs are remarkably similar to GCNNs. Weaknesses: 1. The point about layerwise equivariance is covered in some prior works that are not cited. Much of appendix D.1. in particular is discussed in depth in [1]. Limitations of linear layerwise equivariance is discussed in [Finzi et al. 2021]. 2. As noted by the authors, the restriction to horizontal flips is restrictive for empirical evaluation, though to be fair horizontal flips consistently improve many vision systems. [Shakerinava et al. 2022] Structuring Representations Using Group Invariants. NeurIPS 2022. [Finzi et al. 2021] A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups. ICML 2021 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What do you mean on Page 5 when you say that the results for Godfrey et al. 2022 for other nonlinearities "also straightforwardly translate to the group equivariance case"? Minor: * Page 4: "and then $\tilde f$ is equivariant" should be "... layerwise equivariant" * Can you give a citation for the existence of dead neurons on page 4? 
* Page 6, several times you refer to the numerator of (4) when you mean the denominator. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Good discussion of limitations in Section 1.2. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their useful suggestions and helpful review. > The point about layerwise equivariance is covered in some prior works that are not cited. Much of appendix D.1. in particular is discussed in depth in [1]. Limitations of linear layerwise equivariance is discussed in [Finzi et al. 2021]. Thank you for pointing out these overlooked references. We will of course incorporate them. > What do you mean on Page 5 when you say that the results for Godfrey et al. 2022 for other nonlinearities "also straightforwardly translate to the group equivariance case"? We apologize for the sloppy formulation which we will improve. What is meant is that Section 3.1/Table 1 of Godfrey et al. contains several results of the type (paraphrased) "For nonlinearity $\sigma$ if $\sigma(Ax)=B\sigma(x)$ for invertible matrices $A, B$ then $A$ and $B$ have the following specific forms...". This implies that the representations acting on the feature spaces in a layerwise equivariant network with activation function $\sigma$ must have matrices of the respective forms of $A$ and $B$. > Minor [...] Thanks! --- Rebuttal Comment 1.1: Comment: Thank you for the response! I have no further questions at the time, and will discuss with other reviewers.
Summary: This paper provides an investigation on whether equivariance of a trained deep neural network with ReLU activations implies that each of its learned layers is equivariant. The authors show that this should be true in some sense, i.e., some kind of group action must be present in the intermediate feature spaces (Line 141-143 and Appendix D.1), as the network should be able to always encode the group transformation of the input in some way to achieve overall equivariance. However, this applies to intermediate feature spaces where the clamping behavior of (ReLU) activations is involved; the authors argue that, when considering the learned linear layers, this is not always true (Example 2.1). Regarding ReLU activation, the authors show that if ReLU is G-equivariant, the representations on the input and output must be scaled permutation representations (Lemma 2.2). From that, for two-layer networks with a strong assumption of invertible weight matrices, the authors show that overall equivariance implies layerwise equivariance where representations on intermediate features are scaled permutation representations (Proposition 2.3). Based on that, the authors investigate what the (requirement for) scaled permutation representations implies about how horizontal flipping symmetry (related to representations of the S2 group) can be encoded in CNNs. The key intuition here is that permutation representations appear exactly in the characterization of GCNNs (Cohen et al., 2016), where the kernel constraint gives that an element of the S2 group acts on the group convolution kernel through the joint action of spatial permutation representation (horizontal flipping here) and channel permutation representation (allocation). 
The authors further make a connection to the invariance of neural networks under permutation of neurons, or channels in case of CNNs due to spatial structure of convolutions, and the permutation conjecture (Entezari et al., 2022) that networks of the same type trained on the same data would lie in the same loss basin up to some permutation of neurons (channels). Then, the authors logically combine the requirement for permutation representations on ReLU for G equivariance, the spatial and channel permutation representation on group convolution filters, and the permutation conjecture, leading to the following conjecture: CNNs trained on a data distribution invariant to horizontal flips would be close to being GCNNs. The authors empirically test their conjecture using VGG and ResNet architectures on CIFAR10 and ImageNet, which supports the proposed conjecture. Cohen et al., Group Equivariant Convolutional Networks (2016) Entezari et al., The role of permutation invariance in linear mode connectivity of neural networks (2022) Strengths: S1. I think this is a solid work that establishes a creative combination of ideas and findings from multiple sub-areas of deep learning theory, and from that proposes some original theoretical findings that combines into a very interesting conjecture that is potentially impactful in the field of equivariant deep learning. S2. In addition to the above, the proposed conjecture is equipped with a proper empirical support involving multiple deep architectures and datasets, which is an important strength of this work. Weaknesses: W1. While reading the paper, I was quite confused about the implication of formulation in Appendix D.1. It seems the results, in particular Proposition D.4, explicitly proves that equivariance of a network implies layerwise equivariance, even under non-invertibility of individual layers. 
Because of this, I was confused when reading the discussion on Example 2.1 as well as Line 158 that overall equivariance might not lead to layerwise equivariance, as these seem like conflicting arguments. Am I missing something? W2. The discussion on identifying an equivalent equivariant network with layerwise equivariance (Line 132-136), while understandably leading to discussion in Appendix C, seems not critical in describing the main conjecture of the paper (please correct me if I am wrong). It might be better in terms of readability to contain the relevant discussion in some separated section. W3. In describing Eq. (3), it might be beneficial to explain how the transformation of filter relates to transformation of input and output (in context of equivariance) for readers not familiar with GCNN. W4. The description of why and how the REPAIR algorithm is used in Line 289-296 was hard to understand, I think it has room for improvement. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Q1. Reading Line 260-268, a question I had was to which extent the proposed conjecture depends on the particular nature of S2, i.e., how specifically it may generalize to other discrete groups such as 90-degree rotations. May I ask for an explanation on this? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: The authors have clarified the limitation of the work in Section 1.2. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and helpful comments. > While reading the paper, I was quite confused about the implication of formulation in Appendix D.1. It seems the results, in particular Proposition D.4, explicitly proves that equivariance of a network implies layerwise equivariance, even under non-invertibility of individual layers. Because of this, I was confused when reading the discussion on Example 2.1 as well as Line 158 that overall equivariance might not lead to layerwise equivariance, as these seem like conflicting arguments. Am I missing something? This is an excellent question and something we have struggled with ourselves. The way to interpret it is that, as linear functions between vector spaces, the layers need not be equivariant. But if we redefine the domains/codomains of the layers then they become equivariant functions with respect to specific nonlinear group actions. In particular, the framing in Appendix D does not rule out group actions in the middle of the network which are nonlinear. In the main part of the paper we discuss the case of linear group actions (=representations) acting on all feature spaces. > W2, W3, W4 We thank the reviewer for these helpful suggestions. > Reading Line 260-268, a question I had was to which extent the proposed conjecture depends on the particular nature of S2, i.e., how specifically it may generalize to other discrete groups such as 90-degree rotations. May I ask for an explanation on this? In the 90-degree rotation case, Entezari's conjecture plus the assumption that the data distribution is rotation invariant gives that when we rotate all filters of the initial CNN 90 degrees, there should be a permutation that aligns these rotated filters with the original ones. If this permutation is cyclic of order 4, then it is a permutation representation of the group of 90 degree rotations. Thus the CNN is a G-CNN w.r.t. this group. 
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the detailed response. I have read through the manuscript again with the response in mind, and the narrative leading to ReLU and permutation representation is more clear now. I recommend the authors to revise Line 141-148 according to the rebuttal to prevent potential misunderstanding (like the one I had) regarding Proposition D.4. Now that my major concern is resolved, I have adjusted my score accordingly.
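The flip case discussed in this thread can be made concrete with a toy two-channel layer whose second filter is the horizontally flipped first one: flipping the input then flips each output map spatially and swaps the channels, i.e. the channels carry a regular permutation representation of the two-element group. A minimal numpy sketch, illustrative only and not from the original exchange, using valid-mode cross-correlation:

```python
import numpy as np

def corr2d(x, k):
    """Valid-mode 2D cross-correlation."""
    H, W = x.shape
    Kh, Kw = k.shape
    return np.array([[np.sum(x[i:i + Kh, j:j + Kw] * k)
                      for j in range(W - Kw + 1)]
                     for i in range(H - Kh + 1)])

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

# "Regular" two-channel layer: the second filter is the flipped first one.
y = np.stack([corr2d(x, k), corr2d(x, np.fliplr(k))])

# The same layer applied to the horizontally flipped image.
y_flip = np.stack([corr2d(np.fliplr(x), k),
                   corr2d(np.fliplr(x), np.fliplr(k))])

# Layerwise equivariance: flipping the input flips each output map spatially
# and swaps the two channels (a regular permutation representation on channels).
expected = np.stack([np.fliplr(y[1]), np.fliplr(y[0])])
assert np.allclose(y_flip, expected)
```

In the 90-degree-rotation case discussed in the rebuttal, the analogous check would use four rotated copies of the filter and an order-4 cyclic channel permutation.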
Summary: Exploring the symmetries of representations and parameters in neural networks is crucial. This paper provides several valuable contributions. First, the authors theoretically proved that for ReLU networks equivariance implies layer-wise equivariance, but not vice versa. Second, inspired by the conjecture by [22], the authors discussed a weaker version that connects CNNs with GCNNs. Finally, quantitative experiments were performed and showed that a ReLU network that has been trained to be equivariant will be layer-wise equivariant. Strengths: 1. This paper is well written. I enjoy the writing. While the concepts, particularly those related to group representations, are complicated and non-trivial, the authors did a great job in introducing the relevant definitions, examples and propositions in an elegant and compact way. The authors also provided necessary intuitive understanding, which greatly helps readers to digest the motivation of how the investigation is performed. 2. The conclusion by Proposition 2.3 is interesting and valuable, even though the proof is a direct result of Lemma 2.2 by [14]. I have checked [14] and find no similar conclusion. The authors also provided abundant theoretical understandings and derivations in the supplementary materials including the discussion of the invariant case. 3. The discussion of how GCNNs are related to the notion of Conjecture 3.1 [14] is insightful and inspiring. 4. The experimental evaluations are scrupulously carried out and are able to support the claims by the authors to a certain extent. Weaknesses: I have no major concern. There are still some questions below: 1. Proposition 2.3 shows that layer-wise equivariance is NOT a necessary condition of equivariance. But the experiments did show that CNN with equivariant training boils down to be layer-wise equivariant. So, why we have this observation and does it mean that the examples derived from Proposition 2.3 are just corner cases that rarely happen in practice? 
Moreover, how does Proposition 2.3 influence the design of GCNNs? Or particularly, what is the relationship between the equivariant constraint in Proposition 2.3 and the derived kernel constraint (Eq. 3) by [31,30]? I would expect the authors to explain more on these points? 2. Would the authors explain why they have the claims in Lines 151-155 that equivariance can hurt the performance? Why in Table 2, the unconstrained models obtained better performance than those constrained or equivariant ones? 3. Table 1 only defined the models trained with invariant losses, which indicates that the performance in Table 2 can only answer Q1? How about the experiments of models trained with horizontal flipping data for Q2? Is Table 3 mainly for Q2? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See the weakness part above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their good comments and questions which will help us improve the paper. > Proposition 2.3 shows that layer-wise equivariance is NOT a necessary condition of equivariance. But the experiments did show that CNN with equivariant training boils down to be layer-wise equivariant. So, why we have this observation and does it mean that the examples derived from Proposition 2.3 are just corner cases that rarely happen in practice? Let us clarify that Proposition 2.3 shows for simple networks that equivariance and layer-wise equivariance are equivalent. Example 2.1 however indeed shows that when the conditions in Proposition 2.3 (specifically, invertibility) do not hold, then a network may be equivariant without being layerwise equivariant. The fact that we don't observe the degeneracy from 2.1 in the experiments could be explained as follows. Note that if several input channels are set to zero by a weight matrix + ReLU, then their values do not matter for the network output. Thus even if they are not equivariant, we can permute them without changing the output of the network. So the permutation found by the weight matching procedure in the experiments might not actually be permuting these later killed-off channels in an equivariant manner, meaning that the network is not strictly layerwise equivariant. But it is layerwise equivariant for the channels that matter. > Moreover, how does Proposition 2.3 influence the design of GCNNs? Or particularly, what is the relationship between the equivariant constraint in Proposition 2.3 and the derived kernel constraint (Eq. 3) by [31,30]? I would expect the authors to explain more on these points? This is an excellent question. While the kernel constraint says given certain group representations what the possible linear layers are, our results instead concern given a certain activation function which group representations are possible. So the results are complementary. 
> Would the authors explain why they have the claims in Lines 151-155 that equivariance can hurt the performance? This refers to the discussion in Appendix C, where we give specific examples. The main point is that an invariant/equivariant network must be equally good at detecting every pattern in all orientations. In practice, given a limited network capacity, it may give better performance to be able to recognize more patterns in fewer orientations than fewer patterns in more orientations. > Why in Table 2, the unconstrained models obtained better performance than those constrained or equivariant ones? We do not know. However a possibility is that it relates to the mentioned discussion in Appendix C, i.e. that the invariant/equivariant networks do not have enough capacity to learn all patterns necessary to classify the data well. > Table 1 only defined the models trained with invariant losses, which indicates that the performance in Table 2 can only answer Q1? How about the experiments of models trained with horizontal flipping data for Q2? Is Table 3 mainly for Q2? All the networks in Table 1 in fact are trained with horizontal flipping data augmentation (see line 282). We will make this more explicit. We have now additionally performed experiments without flip augmentation, please refer to the global rebuttal. --- Rebuttal Comment 1.1: Title: Thank you for the reply Comment: I thank the authors for their efforts in addressing my concerns. There were some misunderstandings on my part before, but after checking the responses and other reviewers' comments, I am sure that this is a solid paper and valuable for publication in NeurIPS. I have no questions now.
Rebuttal 1: Rebuttal: We thank all reviewers for their well-written and very helpful reviews. Reviewer k5hy suggested an extra experiment which we have carried out and believe will be of interest to all reviewers. We train VGG-11 nets on CIFAR-10 *without* horizontal flipping data augmentation. These models have lower accuracy and lower invariance than the ones trained with flip augmentation. Most importantly, they have higher GCNN barrier. The GCNN barrier for the nets trained without flip augmentation is approximately as large as the barrier between two separate nets a la Entezari et al. Since the data still contains horizontal flipping symmetries we find it non-surprising that the obtained nets are still quite close to being GCNNs in this sense. Please find the relevant numbers in the table below: | Name | Accuracy | Invariance Error | Barrier | |----------------------------------------|------------------------------|------------------------------|------------------------------------------| | CNN (Table 2) | $0.901 \pm 2.1\cdot 10^{-3}$ | $0.282 \pm 1.8\cdot 10^{-2}$ | $4.00\cdot 10^{-2}\pm 4.9\cdot 10^{-3}$ | | CNN w/o horizontal flip augmentation (NEW) | $0.879 \pm 1.8\cdot 10^{-3}$ | $0.410 \pm 4.0\cdot 10^{-2}$ | $4.98\cdot 10^{-2}\pm 6.1\cdot 10^{-3}$ | | CNN merged with separate net (Table 4, appendix) | $0.901 \pm 2.1\cdot 10^{-3}$ | $0.282 \pm 1.8\cdot 10^{-2}$ | $5.08\cdot 10^{-2}\pm 5.7\cdot 10^{-3}$ | Also of general interest could be that reviewer ZqAK prompted a generalization of Lemma 2.2 and Proposition 2.3 to affine layers. We refer to the individual rebuttal for this - it is the first question answered there.
Source: NeurIPS_2023_submissions_huggingface (2023)
Summary: This paper shows that CNNs will be close to G-CNNs if they are trained to be equivariant. In addition, they also provide some theoretical analysis and conjectures regarding the layerwise equivariance of ReLU networks. Strengths: 1. They show that equivariance implies layerwise equivariance with a scaled permutation representation acting on the feature maps (Proposition 2.3). 2. They propose a new conjecture 'Most SGD CNN-solutions on image data with a distribution that is invariant to horizontal flips of the images will be close to GCNNs.' (Conjecture 3.2.) They proposed a new measure for closeness to being a GCNN. 3. Experiments on CIFAR-10 and ImageNet support Conjecture 3.2. Weaknesses: 1. In general, the paper may have limited or vague benefits for applications. First, the work focuses on ReLu-networks in the theoretical analysis, ignoring normalization layers. Modern neural network designs heavily rely on different normalization layers. In addition, in the ImageNet experiments, ResNet-50 uses batch normalization, which is not aligned with the theoretical analysis. 2. The paper did not explain what the insights for network design are if ReLU CNNs are layerwise equivariant. In practice, it even hurts the performance of the model if the invariance loss is applied. Then, the generalization of the model might contradict the actual layerwise equivariance of the model. And the invariance loss adds an extra restriction for learning. 3. The evidence of Q1 is much weaker than Q2 in section 4. The G-CNN Barrier of 'CNN' in table 2 is 20$\times$ larger than 'CNN + inv-loss.' I wonder if it is still valid to say it is close to G-CNN. Maybe a baseline without horizontal data augmentation can better justify this. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is 'ppt' defined in section 4.2? I did not find explanations regarding it. 2. 
Could you give more explanations regarding the last sentence of the conclusion 'If a negative GCNN barrier is achievable ...... enable weight space ensembling.'? How 'negative GCNN barrier' is related to 'weight space test time data augmentation', etc? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are given. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful review. We aim to clarify the writing in the paper in accordance with our replies to the reviewer's comments below. > In general, the paper may have limited or vague benefits for applications. It is true that there might not be immediate benefits, but this is the case for much theoretical work on neural networks. See also the clarification of "weight space test time data augmentation" further down. > First, the work focuses on ReLu-networks in the theoretical analysis, ignoring normalization layers. Modern neural network designs heavily rely on different normalization layers. In addition, in the ImageNet experiments, ResNet-50 uses batch normalization, which is not aligned with the theoretical analysis. Batch normalization is not included in the theory section; however, as the reviewer notes, the experiments suggest that layerwise equivariance is still compatible with batch normalization. Note that our result says that using ReLU implies that if we have layerwise equivariance then the representations acting on the features are permutation representations. Batch normalization is equivariant w.r.t. such representations if the batch statistics are computed jointly for channels in the same cycle in the cyclic decomposition of the representation [1]. The fact that we have positive experimental results indicates that statistics for such channels are approximately the same. > In practice, it even hurts the performance of the model if the invariance loss is applied. Then, the generalization of the model might contradict the actual layerwise equivariance of the model. And the invariance loss adds an extra restriction for learning. The invariance loss is not added to boost performance, but rather to test the theory of whether an invariant CNN is layerwise equivariant. Surprisingly, the setting with the invariance loss added after 20% of the training epochs did improve performance of simple VGG nets. 
This is a result that we have not seen in prior literature. > The evidence of Q1 is much weaker than Q2 in section 4. The G-CNN Barrier of 'CNN' in table 2 is 20 larger than 'CNN + inv-loss.' I wonder if it is still valid to say it is close to G-CNN. Maybe a baseline without horizontal data augmentation can better justify this. The closeness to a G-CNN should be interpreted in relation to Entezari's conjecture (3.1). Our experiments suggest that a CNN trained with data augmentation is closer to being a G-CNN than it is to another independently trained CNN. It is indeed not very close to being a G-CNN, but it also is not invariant to horizontal flips, so this does not contradict our theory. In any case, a baseline w/o horizontal flipping augmentation is a great suggestion and we have carried it out. Please refer to the global rebuttal. > What is 'ppt' defined in section 4.2? I did not find explanations regarding it. It is an abbreviation for percentage point. We will write it out in full to avoid confusion. > Could you give more explanations regarding the last sentence of the conclusion 'If a negative GCNN barrier is achievable ...... enable weight space ensembling.'? How is a 'negative GCNN barrier' related to 'weight space test time data augmentation', etc.? One of the motivations for studying the Entezari conjecture has been that if we can find permutations relating two networks, then we can average their weights to obtain a so-called weight space ensemble of them. The goal is to obtain the benefits of ensembling while still working with a single network. If we can find permutations relating a network A to its horizontally flipped self B, then we can perform such weight space ensembling between A and B. Note that the output of B will be the same (up to border/stride effects) as what we would get if we insert a horizontally flipped image into A.
Thus ensembling A and B is the same as doing test time augmentation with horizontal flipping, and weight space ensembling of A and B could be called weight space test time augmentation. If the GCNN barrier is negative, it means that we get improved performance by doing this. Note that a negative barrier has not yet been obtained for the Entezari setting of two different networks, and our results suggest that it should be easier to obtain for our setting of a network and its flipped self. [1] Weiler & Cesa, General E(2)-Equivariant Steerable CNNs, NeurIPS, 2019 --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their detailed responses. Most of my concerns are addressed and additional explanations are provided. The 'weight space test time data augmentation' may provide an interesting direction for future works, and it seems promising. As a result, I changed my score to positive.
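The weight-space ensembling of a network A with its horizontally flipped self B, described in the rebuttal above, can be sketched in a few lines. This is a hedged toy illustration, not code from the paper: the helpers `weight_space_ensemble` and `hflip_conv_weights`, the parameter layout, and the single-kernel example are all hypothetical.

```python
import numpy as np

def weight_space_ensemble(params_a, params_b, alpha=0.5):
    # Average two aligned parameter sets (hypothetical helper; assumes
    # the permutations relating the two networks are already resolved).
    return {k: alpha * params_a[k] + (1 - alpha) * params_b[k]
            for k in params_a}

def hflip_conv_weights(params):
    # Flip convolution kernels along the width axis, mimicking the
    # horizontally flipped network B (up to border/stride effects).
    return {k: w[..., ::-1].copy() for k, w in params.items()}

params_a = {"conv1": np.arange(12.0).reshape(1, 1, 3, 4)}
params_b = hflip_conv_weights(params_a)
ensembled = weight_space_ensemble(params_a, params_b)

# Averaging a kernel with its horizontal flip yields a horizontally
# symmetric kernel.
assert np.allclose(ensembled["conv1"], ensembled["conv1"][..., ::-1])
```

Averaging a kernel with its own horizontal flip produces a horizontally symmetric kernel, matching the intuition that the averaged network behaves like test-time augmentation with horizontal flipping.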
When Does Optimizing a Proper Loss Yield Calibration?
Accept (spotlight)
Summary: This paper elucidates the relationship between proper losses and calibration by providing the minimal necessary and sufficient condition for proper losses to induce calibrated models. The condition, local optimality, captures the idea that no post-processing function can further improve a proper loss. This is related to a specific measure of calibration called smooth calibration error, which is a kind of correlation between miscalibration and model predictions. The smooth calibration error is used here because it does not suffer from a discontinuous nature, unlike the popular expected calibration error, and it naturally emerges from the Bregman divergence structure. By leveraging the Bregman divergence structure of proper losses and convex analysis, a theoretical connection between the smooth calibration error and the post-processing gap (the quantitative version of the local optimality condition) is established, which indicates that minimizing the post-processing gap is necessary and sufficient to achieve sufficiently small calibration error. As an implication of the theory, the authors point out a potential connection between the implicit bias regularization of DNNs and the post-processing gap, so that sufficiently over-parametrized networks may achieve a small post-processing gap, explaining why current neural networks are well-calibrated. Overall, this paper pushes the understanding of proper losses and calibration toward the context of modern neural network regimes. Strengths: - A new connection between proper losses and calibration: Though the two concepts seek similar goals, they have been studied independently, and the relationship has not been understood well. The main theorem of this paper draws the connection by establishing the upper and lower bounds of the post-processing gap (related to proper losses) by the smooth calibration error (related to calibration). As far as I know, this is the first attempt to connect the two concepts.
The concept of the post-processing gap is well motivated by deep neural networks (regarding the post-process as the layer addition). - A transparent proof: The proof of the main result (Theorem E.8) gives us a nice picture of the relationship between the calibration error and proper losses. Specifically, the proof of the bounds mostly leverages the smoothness of a function $\psi$. This proof is simple and gives us an insight that the structure of $\psi$ essentially governs the connection. - The clarity of the presentation: Though the concepts introduced in this paper are rather dense, the authors did a nice job of presenting them gradually from a conceptual level to a technical level, which helps readers who may not be familiar with those concepts understand them easily. Weaknesses: - Potential gaps between the theory and Guo et al. (2017): The authors argue that "the previous generation of image classifiers were poorly calibrated [Guo et al., 2017]" and "state-of-the-art DNNs are often well-calibrated: because their test loss cannot be improved much by adding a few more layers." However, I feel that the architectures used by Guo et al. (2017) are already very deep. For example, in their pilot study (Figure 1), they chose to use a 110-layer ResNet, which would be sufficiently over-parametrized. Since Guo et al. (2017) argued that DNNs are poorly calibrated even with that number of layers, I would like to see the authors' discussion on this line. - Missing key references: Some results and claims in the paper are substantially related to previous works that are not mentioned in the paper. It would be great to give credit to them. For example, the dual mapping of the form $\mathsf{dual}(v) := \ell(0,v) - \ell(1,v)$ (l. 299) is known in Eq. (47) of Buja et al. (2005); The composition of a proper loss $\ell$ and the dual mapping $\psi$ is known as composite losses, coined in Reid and Williamson (2010). The dual loss form in Eq.
(6) and Definition 4.3 is known as the Fenchel-Young losses, and the expression was shown in Eq. (14) of Blondel et al. (2020) and Eq. (11) of Duchi et al. (2018); The convex conjugate structure $\nabla\psi(\mathsf{dual}(v)) = v$ (l. 329) was pointed out in Figure 1 of Bao and Sugiyama (2021); Some parts of Lemma E.4 are closely related to Proposition 2 of Blondel et al. (2020). - Restriction to binary classification: The attention of this whole paper is restricted to binary classification, as mentioned in the conclusion. This is far more restrictive in practical situations. But I don't think this limitation undermines the impact of this paper. **References** - Reid and Williamson (2010) "Composite Binary Losses" - Buja et al. (2005) "Loss Functions for Binary Class Probability Estimation and Classification: Structure and Applications" - Blondel et al. (2020) "Learning with Fenchel-Young Losses" - Duchi et al. (2018) "Multiclass Classification, Information, Divergence, and Surrogate Risk" - Bao and Sugiyama (2021) "Fenchel-Young Losses with Skewed Entropies for Class-posterior Probability Estimation" Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Some minor comments: - (Question) At l.261, I don't get the point of "within the restricted class where the logit is a linear combination of the features," specifically, what "the features" mean in the current context. Could you clarify? - (Question) Why do you present the non-generalized dual calibration error in Section 4, unlike the generalized one in the appendix? I do not get the reason why it is useful to confine the generalization $w(x,g(x))$ to $\eta(x)$. - (Typo) At l.75, "depends on on the architecture" -> "depends on the architecture" - (Typo) At l.204, a parenthesis is missing for $\eta(f(x))$. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors discuss the limitations of this work at the end of the paper and point out that there is room to investigate the connection among calibration, DNN architectures, and optimization. Most of the work is theoretical, and few negative societal concerns would apply. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
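As a toy numerical illustration of the smooth calibration error central to this review: it is the supremum over bounded 1-Lipschitz witnesses $w$ of $E[w(f(x))(y - f(x))]$, which vanishes for a calibrated predictor and is bounded away from zero for a systematically biased one. This is a hedged sketch; the data-generating processes and the constant witness below are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Model predictions, and two labelings: one calibrated (E[y | f] = f)
# and one that systematically over-predicts by 0.1 (toy assumption).
f = rng.uniform(0.1, 0.9, size=n)
y_calibrated = rng.random(n) < f
y_miscal = rng.random(n) < np.clip(f - 0.1, 0.0, 1.0)

def smcalib_witness_value(f, y, w):
    # E[w(f(x)) * (y - f(x))] for one bounded 1-Lipschitz witness w;
    # the smooth calibration error is the supremum over all such w.
    return np.mean(w(f) * (y - f))

w = lambda v: np.ones_like(v)  # constant witness, trivially 1-Lipschitz

assert abs(smcalib_witness_value(f, y_calibrated, w)) < 0.01
assert smcalib_witness_value(f, y_miscal, w) < -0.08
```

Even this single crude witness separates the two cases: the calibrated labels give a value near zero, while the biased labels give a value near the bias magnitude of -0.1.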
Rebuttal 1: Rebuttal: Thank you for the positive feedback! Here is our response to the individual comments and questions in your review: **Re. relation to Guo et al. (2017).** Thanks for the great question. The issue is subtle. We discussed it in our paper (see Line 119) and would like to further elaborate below. It is simultaneously true that: 1. The DNNs used in [Guo et al.] are "deep enough" 2. The DNNs used in [Guo et al.] can be substantially improved (w.r.t. test loss), by simple post-processing. The resolution is that the DNNs of [Guo et al.] were optimized for test *Error* (not test loss), and thus were "overtrained" – they trained these DNNs for many epochs, enough to nearly interpolate the train set. This causes a high test loss, which they notice in Figure 3 of [Guo et al.]. The authors themselves recognize this discrepancy between error and loss: "neural networks can overfit to NLL without overfitting to the 0/1 loss." In fact, if one trains the same DNNs as Guo et al., but optimizing for test *loss* (and thus, early-stopping before the loss overfits), then the resulting networks are very well-calibrated, consistent with our theory. (We have confirmed this experimentally, and given your question, we will consider adding this experiment to the full version). Our comment about "state-of-the-art DNNs are often well-calibrated" holds because, these days, DNNs are trained in the "very large data" regime, sometimes even for just 1 epoch, so they have a small loss-generalization gap, and don't overfit their test loss. (e.g. the GPT-3 tech report which includes epoch details: https://arxiv.org/abs/2005.14165 , and Figure 16 of the GPT scaling laws paper [https://arxiv.org/abs/2001.08361], which shows plots of test vs. train loss in both early-stopped and overfitting regimes).
To rephrase, the essential difference between SOTA networks in 2017 and 2023 is: In 2017, classification networks were trained for many epochs, and thus had very high test loss (despite small test error). But in 2023, we often have so much data that we cannot afford many epochs, and thus the test loss does not overfit. We can add this more extended discussion to the final version of our paper. **Re. missing references.** We are grateful that you point us to these references! They are very relevant and we will make sure to cite them properly in our final version. We stated in our submission that the connection between proper losses and conjugate pairs of convex functions is well known before our work. We are eager to include a more comprehensive discussion about where each specific notion/result has appeared in the literature. Your suggested references are extremely helpful for us to this end. **Re. beyond binary classification.** See our response to reviewer ooFk re. multiclass settings. **Re. features in logistic regression.** We are referring to the standard logistic regression setting where each data point (x,y) has a feature vector $x\in \mathbb{R}^d$ and a binary label $y\in \{0,1\}$. Given a number of such data points, our goal is to learn a linear/affine function $g(x) = \langle a,x\rangle + b$ that maps a feature vector x to a logit g(x), after which we apply the sigmoid transformation $\sigma$ to get the final prediction $\sigma(g(x))\in [0,1]$. Here the logit g(x) is restricted to the form $g(x) = \langle a,x\rangle + b$. That is, the logit is a linear combination of the coordinates of x plus a bias b. Each coordinate of the feature vector x is usually called a feature, so the logit is a linear combination of the features plus a bias. We will add “plus a bias” to our paper to be more accurate. **Re.
presentation of generalized dual calibration error.** We choose to defer our results about generalized dual calibration error to the appendix because 1) we have limited space, and 2) we do not want our core calibration results to be diluted by the more general results. In our final version, we will add text to our main paper mentioning and pointing to the more general results in the appendices so that interested readers will not miss them. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for addressing my concerns. Specifically, the clarification on the relation to Guo et al. (2017) was helpful. Adding such a discussion in the next version would make the paper less misleading by clarifying how the practice of neural network training has changed over the past few years.
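The logistic regression setting described in the rebuttal above (logit $g(x) = \langle a,x\rangle + b$ followed by a sigmoid) can be sketched as follows; the feature vector, weights, and bias are arbitrary illustrative values, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, a, b):
    # Logistic-regression prediction: the logit is a linear
    # combination of the features plus a bias, passed through
    # the sigmoid to land in [0, 1].
    g = x @ a + b          # logit g(x) = <a, x> + b
    return sigmoid(g)

x = np.array([1.0, 2.0, -0.5])   # feature vector
a = np.array([0.3, -0.1, 0.8])   # learned weights (illustrative)
b = 0.25                         # bias
p = predict(x, a, b)
assert abs(p - 0.4875) < 1e-3    # sigmoid(-0.05) ≈ 0.4875
```

The restriction discussed in the rebuttal is exactly that `g` is affine in `x`; richer model families replace this line with a more expressive function of the features.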
Summary: This work introduces a local optimality condition for models (with respect to proper losses) based on (additive) post-processing with a 1-Lipschitz function that is necessary and sufficient for calibration. The authors also connect their results to the idea of implicit regularization, showing that structural risk minimization with an appropriate class of complexity measures achieves good calibration. Strengths: ## Originality The introduced notion of post-processing error and its connection to (smooth) calibration is novel and useful, as it provides plausible explanations for why current SOTA deep learning models are better calibrated when compared to previous generations. Additionally, the introduced local optimality condition based on post-processing is distinct from conditions seen in optimization, but remains non-trivial since the class of post-processing functions is restricted to be 1-Lipschitz. ## Quality The paper is technically sound and polished; the definitions are well-motivated and simpler results are accompanied with proofs (or at least intuitive justification) in the main body of the paper. ## Clarity The theory is easy to follow, and the authors have done a good job of scaling the complexity of the results as the paper progresses (introducing simple examples and high-level ideas first, and generalizing later). The authors also qualitatively connect their results to recent empirical phenomena in deep learning, which is helpful for grounding the theory. ## Significance Calibration is an increasingly important problem, and the paper provides insights into how to think about the relationship between calibration and post-processing procedures. Weaknesses: - *Experiments:* While I understand this is a theory paper, I do think it would be nice to have some experimental analysis of whether the local optimality condition is satisfied for modern architectures (i.e.
simply adding an extra layer as suggested in the paper and analyzing calibration performance). - *Addressing Popular Post-Processing Methods:* If I understand the discussion around Definition 2.2 correctly, the post-processing operation defined does not include temperature-scaling-type techniques, since $f(x)$ includes the sigmoid operation and one would have to apply the re-scaling to the logits. If this is not the case, some clarification in this section linking the post-processing definition back to the standard post-processing approaches would be useful. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: My only question is related to my discussion under weaknesses; namely how temperature scaling type methods would fit in with Definition 2.2. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors appropriately address limitations; namely, they acknowledge that their results do not explain _why_ the optimization process for current models leads to calibrated predictors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback! Here is our response to the individual comments and questions in your review: **Re. temperature scaling.** Typically temperature scaling is applied to the logits when we optimize the cross-entropy loss. Thus temperature scaling is more closely related to Definition 2.5 (which uses the cross-entropy loss) instead of Definition 2.2 (which uses the squared loss). In Definition 2.5 we apply a Lipschitz post-processing $\kappa$ to the **logit** g(x) (see text around Line 225). Temperature scaling, i.e., dividing the logit g(x) by a temperature parameter T > 0, is a special case of such Lipschitz post-processings as long as T is not too close to zero (we need this assumption on T to ensure Lipschitzness). Definition 2.5 also considers many other Lipschitz post-processings that cannot be expressed as temperature scaling. This is important because our logistic regression example in Appendix B shows that a small post-processing gap w.r.t. temperature scalings alone does NOT imply a small calibration error. We will include this discussion about temperature scaling in our final version. **Re. experiments.** See our response to Reviewer 6rky re. experiments. --- Rebuttal Comment 1.1: Comment: Ah, thank you - that makes sense. I have no further questions.
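A small numerical sketch of the point about temperature scaling above: dividing the logit by $T$ is the same as adding the update $\kappa(g) = g/T - g$ to the logit, which is Lipschitz with constant $|1/T - 1|$, hence bounded whenever $T$ is bounded away from zero. The code below is a hypothetical illustration, not from the paper.

```python
import numpy as np

def temperature_scale(logits, T):
    # Temperature scaling: divide the logits by T > 0.
    return logits / T

g = np.linspace(-4.0, 4.0, 9)            # a grid of logits

# Viewed as an additive post-processing of the logit,
# kappa(g) = g/T - g, with Lipschitz constant |1/T - 1|.
kappa = temperature_scale(g, T=2.0) - g

# Empirical Lipschitz constant over the grid.
lip = np.max(np.abs(np.diff(kappa) / np.diff(g)))
assert abs(lip - 0.5) < 1e-12            # |1/2 - 1| = 0.5
```

For T far from zero the constant stays bounded, so temperature scaling falls inside the family of Lipschitz logit post-processings discussed in the rebuttal; as T approaches zero the constant blows up, which is why the rebuttal's assumption on T is needed.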
Summary: The paper considers calibration in binary classification when training was performed with proper losses. The authors showed that the post-processing gap of a predictor, which is a maximum improvement of the loss given any 1-Lipschitz update (calibration) function, could be both lower and upper bounded by a simple expression of smooth calibration error. Moreover, the authors proved that the constants used in upper and lower bounds are optimal. Strengths: The paper describes the connection between proper loss landscapes and calibration error improvement, which is an important and quite original problem under the current formulation. The paper has good positioning among other papers in the field, with clear references to many studies and a detailed indication of what was their contribution. The clarity of the paper is good. Weaknesses: The link with potential practical application is missing (or rather not convincing). Adding small-scale experiments to support theoretical results would be beneficial, e.g. to check whether the pre-requisites of the theoretical results are (approximately) fulfilled in practice. The discussion and explanation of theoretical results could be improved. The current impression is that either theoretical results are not that impressive (and based mostly on properties of Bregman divergence) or that the potential impact of derived theory is oversold. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The derived results are valid in binary classification settings. Will it be possible, and what are the possible limitations, to obtain similar results in multiclass settings? 2. Could it be said that all theoretical derivations are based on properties of Bregman divergences but in a new setting connected with calibration errors? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations have been covered at the end of the paper. No need to discuss societal impact in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback, and for recognizing our work on an "important and quite original problem." We are glad the reviewer rates our soundness, presentation, and contribution as "good." We are thus unsure why the overall score is a borderline reject, but we have included our response to the individual comments and questions below. If the reviewer is satisfied with our responses, and believes this paper should appear in NeurIPS, we ask that they consider increasing their score to an Accept. **Re. practical applications.** Our theory can be viewed as a bridge connecting calibration and proper loss minimization. Calibration is a desired property in numerous practical applications, but our understanding of how to achieve calibration is generally not as deep as our understanding of how to perform loss minimization. Our theory allows people to apply the machinery of loss minimization to reason about calibration, and we hope that our theory will inspire better-principled approaches to calibration. Here is an immediate impact of our paper on practical applications. Practitioners (using sklearn for instance) are currently encouraged to believe that solely by minimizing proper loss within a restricted family of functions, they will get calibrated predictors (see the references in our paper). We show convincingly that in this generality, the claim is false. Moreover, we identify a sufficient and necessary local optimality condition for proper loss minimization that guarantees calibration. By being aware of our corrected connection between loss minimization and calibration, practitioners can avoid false confidence in a model’s calibration and eventually build more calibrated models. **Re. multiclass settings.** Our results can be extended to the multiclass setting, but such an extension is not entirely straightforward. Even the definition of calibration in the multiclass setting is subtle.
In the multiclass setting, the prediction f(x) becomes a vector that assigns a probability to each class. One way to define calibration is to condition on the entire vector f(x) (canonical calibration), whereas other ways may involve conditioning on certain coordinates of f(x) (e.g. confidence calibration conditions on the maximum coordinate). How to robustly and efficiently measure the distance to each type of calibration in the multiclass setting adds another layer of complexity, and the recent work [Błasiok et al., 2023] only answers this question for binary labels. Our theory is more easily extended to canonical calibration than confidence calibration, but confidence calibration is widely used in practice. We thus leave the important but challenging question of building a complete extension of our theory in the multiclass setting to future work. We believe this is more conducive to the community than to include in the current paper an incomplete theory for the multiclass setting that may risk being misleading. **Re. properties of Bregman divergence.** Yes, as we mention in our submission, a lot of our proof techniques are closely related to properties of the Bregman divergence and the theory of convex conjugates. A major part of our contribution is formulating the right theorems to prove, which is as important as proving those theorems. We believe that by presenting our proof techniques in such a way that they appear relatively simple in retrospect, we make our paper more easily digestible and thus potentially more impactful. **Re. experiments.** See our response to Reviewer 6rky re. experiments. --- Rebuttal Comment 1.1: Title: Re: rebuttal Comment: Thanks for the clarifications! 
**Re: Re: practical applications.** You have written in your response that _"Practitioners (using sklearn for instance) are currently encouraged to believe that solely by minimizing proper loss within a restricted family of functions, they will get calibrated predictors (see the references in our paper)"_. I am not convinced that it is true. Your work refers to a paper published in 2011 about sklearn documentation. Back then, there might indeed have been such a belief about proper losses, given that no large NN models existed. The up-to-date sklearn documentation about calibration does not seem to contain such encouragement. In my opinion, it is common knowledge that models tend to be uncalibrated, unless post-hoc calibration is used. **Re: Re: experiments.** Please comment on the two following points and questions. 1. The experimental evidence you are referring to is mainly about models trained with cross-entropy. However, your theoretical results are valid for all proper losses. Could we expect that new large NNs could be trained basically with any proper loss and have a nearly perfect calibration, given, for example, a slightly larger dataset/architecture, than existing models trained with cross-entropy have? 2. You claimed that your theoretical results provide an explanation of calibration differences between an old and new generation of NN models (due to a much larger dataset and more complex architecture). Those calibration differences are the main experimental evidence of your theory in the paper. However, it could probably be explained in a simpler way: proper losses should have well-calibrated results on a train set. Once the train set is big enough (as for the new generation of NN models), we could expect the train set empirical distribution to be very close to the ground-truth population distribution, so the results on the test set (generated from this unknown population distribution) would be well-calibrated.
Could you comment on this reasoning, and do you agree with it? Could you suggest other existing experimental evidence that supports your theoretical results as opposed to the above simpler explanation? --- Reply to Comment 1.1.1: Comment: Thank you for replying! **Re: practical applications.** We believe our quoted statement is well-justified for the following reasons: First, our statement is not just about neural networks, it is about minimizing proper losses *within a restricted family of functions.* Logistic regression is an instance of such restricted minimization, and many sources claim logistic regression is well-calibrated primarily because it minimizes a proper loss. As mentioned in our paper, the sklearn documentation as of August 16 2023 says "LogisticRegression returns well calibrated predictions by default as it has a canonical link function for its loss, i.e. the logit-link for the Log loss" (from the url we cited: https://scikit-learn.org/stable/modules/calibration.html). **Our 2011 citation is for the Scikit-learn package itself, and we also cited the up-to-date documentation url.** The wikipedia page on Platt scaling (as of August 16 2023) also claims that logistic regression gives well-calibrated models: “(Platt scaling) has less of an effect with **well-calibrated models such as logistic regression**, …” (https://en.wikipedia.org/wiki/Platt_scaling). However, our analysis demonstrates that this is not true: logistic regression can be severely mis-calibrated, essentially because it is not minimizing over a sufficiently rich family of functions (see lines 42-50, and Appendix B of our paper). Logistic regression remains an extremely popular method among practitioners of machine learning and data science in the real world, even in this age of "large NNs", and thus we believe our statement on practical relevance is justified. 
Finally, regarding neural networks, we do not agree with the claim that "it is common knowledge that models tend to be uncalibrated, unless post-hoc calibration is used." Simply put, it is not true: there are now many documented instances of modern neural networks being extremely well-calibrated out of the box, as we cited in our paper. These citations include, for example, the GPT-4 tech report (Figure 8, left, in: https://arxiv.org/abs/2303.08774). **Re: experiments.** 1. Yes, our theory does predict that minimizing the test loss for essentially any proper loss over a rich enough model family will yield a calibrated model. We observed this experimentally in an ongoing and upcoming work. 2. Regarding the potential simpler explanation: we agree with the logic of the explanation (see our response to Reviewer CHsP) and the explanation was proposed by Carrell et al., 2022 (cited in our paper). However, the premise that “proper losses should have well-calibrated results on a train set” is non-trivial to establish, and establishing it is a motivation of our work. This premise is not true in general: logistic regression can be poorly calibrated on its train set, despite minimizing a proper loss. It turns out that sufficiently large neural networks *are* in fact well-calibrated on their train sets, as observed experimentally by Carrell et al., 2022, and these observations were an inspiration for our theoretical work.
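The notion of a proper loss that runs through this exchange can be illustrated numerically: a loss is proper when, for $y \sim \mathrm{Bernoulli}(p)$, the expected loss is minimized by predicting $v = p$. A minimal sketch with the squared loss; the grid search and the chosen value of $p$ are illustrative, not from the paper.

```python
import numpy as np

def expected_squared_loss(v, p):
    # E_{y ~ Bernoulli(p)} [(y - v)^2] = p(1-v)^2 + (1-p)v^2.
    # A loss is proper when this expectation is minimized at v = p.
    return p * (1 - v) ** 2 + (1 - p) * v ** 2

p = 0.3
vs = np.linspace(0.0, 1.0, 1001)          # candidate predictions
best_v = vs[np.argmin(expected_squared_loss(vs, p))]
assert abs(best_v - p) < 1e-3             # minimizer sits at v = p
```

This is the sense in which truthful prediction is optimal pointwise; the subtlety the rebuttal emphasizes is that minimizing such a loss over a *restricted* function family need not yield a calibrated predictor.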
Summary: The paper provides a novel characterization of calibration as local optimality of the predictor w.r.t. the global loss under post-processing of the prediction through a class of functions. The paper proves that any predictor satisfying such a condition is smoothly calibrated in the sense of [Kakade and Foster, 2008, Błasiok et al., 2023] and vice versa. The paper further provides arguments for why such a condition should be satisfied by Deep Neural Networks (DNNs). Strengths: - The paper is well written, with a clear structure and ideas being developed in a coherent and logical sequence. - The topic is of large importance in the present context of research in machine learning and has numerous implications for practical applications. - The authors contribute a novel perspective on the 'implicit' calibration of machine learning models without the use of algorithms designed specifically for calibration, an aspect that is not extensively explored in current literature. - The explicit characterization of calibration in terms of local optimality under post-processing transformations, while implicit in earlier works, appears to be novel. This equivalence could be useful for further theoretical analysis of calibration, especially for deep neural networks. - The proposed framework is general and does not rely on specific choices of model architecture or data distribution. - The paper provides partial explanations for the observed calibration of deep neural networks trained on large training datasets. Weaknesses: - **Limited technical contributions**: Claim 2.1 itself directly follows from the definition of perfect calibration. The main theoretical results in the paper are generalized formulations of Claim 2.1 to smooth calibration, Lipschitz post-processing functions, and general proper losses. While these generalizations themselves are novel, their proofs involve straightforward algebra and convex analysis.
Therefore, the technical and mathematical contributions of the paper are limited. The results could be strengthened with examples of non-trivial results that can be derived using the local-optimality-based characterization of calibration. For deep neural networks, the present results are only suggestive and would benefit from additional details and concrete results. - **Missing references**: - On double-descent in uncertainty quantification in overparametrized models: Lucas Clarte, Bruno Loureiro, Florent Krzakala, Lenka Zdeborova, Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:7089-7125, 2023. - Theoretical characterization of uncertainty in high-dimensional linear classification, Lucas Clarté, Bruno Loureiro, Florent Krzakala, and Lenka Zdeborová, Machine Learning: Science and Technology, Volume 4, Number 2 The above papers analyze calibration for empirical risk minimization and, like the present paper, also highlight the role played by regularization. - **Dataset size and overparameterization**: The paper does not address aspects related to generalization and the effect of the training dataset size. While modern training setups utilize large training dataset sizes and one-pass SGD, their behavior in the proportional regimes of high-dimensional inputs and parameters is non-trivial and not equivalent to the minimization of population loss. For instance, the above papers establish a double-descent-like phenomenon for calibration for varying levels of overparametrization. - **Experiments**: In light of the limited technical contributions, the paper could benefit from experimental justification of the validity of the local optimality condition for deep neural networks in realistic training setups. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What are some complexity measures satisfying the condition in Claim 4.8?
- What are the limitations of the definition of smooth calibration error used in the paper, especially in the context of deep neural networks? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Yes, the limitations have been adequately addressed. The work is theoretical in nature and does not have direct societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comprehensive review! Here is our response to the individual comments and questions in your review: **Re. technical contributions.** A main contribution of our work is formulating the right generalization of Claim 2.1 where 1) the notion of calibration is meaningful, and at the same time 2) the post-processings can be naturally implemented in deep neural networks that are not explicitly trained for calibration. We think these contributions should be weighed more heavily than the technical difficulty of the proofs. Indeed, we think that the reason such statements are not found in previous literature is that the right formulation is fairly subtle – for example, until recently it was not clear how to measure miscalibration in a reasonable way (e.g. ECE is not ideal as it's discontinuous; more in our response below re. the smooth calibration error). As an “example of non-trivial results that can be derived using the local-optimality based characterization of calibration”, our Claim 4.8 identifies the types of regularization that make structural risk minimization (SRM) produce well-calibrated predictors. A key step for establishing Claim 4.8 is using our theory to reduce the problem to determining what types of regularization make the solution to SRM satisfy the local optimality condition. In particular, our Claim 4.8 reveals that regularization plays a different role in *calibration* than it does for *generalization* – it is possible to have regularizers that work for both aspects, but it is not at all necessary. **Re. missing references and the generalization aspect.** Thank you for pointing us to these papers! In our final version, we will make sure to cite them and include a refined discussion about generalization based on them. As we mention in our submission, generalization concerns are out-of-scope for our paper – we consider only optimization aspects, by considering all quantities on the population distribution.
Both generalization and optimization aspects are ultimately important to consider, but our paper focuses on only the latter. **Re. experiments.** Many existing experiments in the literature have verified that many neural networks trained via loss minimization exhibit good calibration performance. We mentioned many such examples in our submission. According to our theory, these neural networks must also satisfy the local optimality condition. In other words, that neural networks satisfy the local optimality condition in realistic training setups is a rather general phenomenon that has already been observed experimentally. Our contribution is drawing the connection between calibration and local optimality, which explains why previous experiments observe good calibration in models trained via loss minimization alone. **Re. complexity measures that satisfy the condition in Claim 4.8.** The size of a neural network, when multiplied by a suitable constant, satisfies the condition. This is because, by the universality of neural networks, a 1-Lipschitz post-processing can be (approximately) implemented by adding a constant number of neurons to a neural network. **Re. limitations of smooth calibration error.** Recently Błasiok et al. [2023] showed that the smooth calibration error is a *consistent calibration measure*. Specifically, they showed that the smooth calibration error differs by at most a constant factor from the Wasserstein distance to perfect calibration. By definition, all consistent calibration measures are polynomially related to each other, so all our results about the smooth calibration error also hold for any consistent calibration measure, up to modifying constants in our inequalities.
That is, by building our theory using the smooth calibration error, our results automatically apply to the kernel calibration error with the Laplace kernel, the interval calibration error (*modified* binned ECE with the right bin width and randomly shifted bins), and the distance to calibration itself. In contrast, the popularly used ECE and binned ECE are not consistent calibration measures and are not in general polynomially related to the distance to calibration. Błasiok et al. [2023] also showed efficient algorithms for estimating the smooth calibration error. Given the consistency, efficiency, and continuity of the smooth calibration error, perhaps a downside of it is that it is not currently used as often as the binned ECE, but this may change as people become more aware of the advantages of using a consistent calibration measure. **In conclusion:** If the reviewer is satisfied with our responses, and believes this paper should appear in NeurIPS, we ask that they consider increasing their score to an Accept. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing clear and comprehensive responses. I've raised my rating on the assumption that the authors will incorporate discussions about the above points and missing references into the revised version of the paper.
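As a concrete illustration of why the smooth calibration error behaves better than binned ECE, here is a toy population-level computation (our construction, not an example from the paper or from Błasiok et al.). The smooth calibration error is the supremum of E[w(f)(y - f)] over witnesses w bounded in [-1, 1] and 1-Lipschitz in the predicted value; for finitely many prediction values this supremum is a small linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Two prediction values with equal mass, miscalibrated in opposite directions:
v = np.array([0.4, 0.6])   # predicted values
p = np.array([0.5, 0.5])   # probability mass on each value
r = np.array([0.2, -0.2])  # conditional mean residuals E[y - f | f = v_b]

# Maximize sum_b p_b * r_b * w_b over w in [-1, 1]^2 with |w_0 - w_1| <= |v_0 - v_1|.
c = -(p * r)  # linprog minimizes, so negate the objective
A_ub = np.array([[1.0, -1.0], [-1.0, 1.0]])  # Lipschitz constraint, both directions
b_ub = np.full(2, abs(v[0] - v[1]))
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-1.0, 1.0)] * 2)
smce = -res.fun
# A per-bin ECE-style measure would report 0.2 here, but the 1-Lipschitz witness
# class discounts miscalibration that oscillates on nearby prediction values.
assert abs(smce - 0.02) < 1e-6
```

The Lipschitz continuity of the witness in the predictions is exactly what makes this measure continuous, in contrast to binned ECE.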
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper seeks to explore and formalize the relationship between minimizing proper loss and calibration in machine learning models, particularly deep neural networks (DNNs). It presents a local optimality condition that is necessary and sufficient to ensure model calibration. The work discusses the implications of these findings on the calibration properties of modern DNNs and presents algorithms that can guarantee calibration. The paper also contrasts the differences in calibration between current and previous generation models through the lens of generalization. Strengths: NA Weaknesses: NA Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback!
Closing the Computational-Statistical Gap in Best Arm Identification for Combinatorial Semi-bandits
Accept (poster)
Summary: The paper presents Perturbed Frank-Wolfe Sampling (P-FWS), an algorithm for the best arm identification problem in combinatorial semi-bandits in the fixed confidence setting. The algorithm achieves instance-specific minimal sample complexity in the high confidence regime and polynomial sample complexity guarantees in the moderate confidence regime. The authors show that P-FWS closes the computational-statistical gap in best arm identification in combinatorial semi-bandits. They describe the design of P-FWS, which starts from the optimization problem defining the information-theoretical and instance-specific sample complexity lower bound. P-FWS solves this problem in an online manner using the Frank-Wolfe algorithm with computationally efficient successive updates. Strengths: P-FWS is a new algorithm that addresses the best arm identification problem in combinatorial semi-bandits and achieves optimal sample complexity in both high and moderate confidence regimes. This makes it a significant contribution to the field. The paper highlights the closure of the computational-statistical gap in best arm identification in combinatorial semi-bandits. This is an important result, as it demonstrates the algorithm's efficiency in terms of both computational complexity and statistical performance. Weaknesses: The paper primarily focuses on theoretical analysis and does not include empirical experiments or evaluations on real-world datasets. While the theoretical guarantees are valuable, empirical results could provide further insights into the algorithm's practical performance and generalizability. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: How do you think your work could be extended beyond the LM Oracle? For instance, in https://arxiv.org/abs/2302.11182, there are several combinatorial problems that do not fit your setting.
I think it would be good to add this reference (and perhaps others) and mention (as future work or a limitation) that more general combinatorial problems exist and that your setting is restricted. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper assumes the availability of a computationally efficient Linear Maximization (LM) Oracle, which can identify the best action given knowledge of the parameter µ. While this assumption may hold for some combinatorial sets of actions, it may not be applicable in all scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your careful review and positive feedback. About experiments. In this paper, we wanted to highlight its strong theoretical contributions: for combinatorial bandits, our algorithm P-FWS is the first polynomial-time algorithm that is asymptotically optimal in the high confidence regime. It also has a sample complexity polynomial in $K$ in the moderate confidence regime. We implemented our algorithm P-FWS. But we found it difficult to compare its performance to that of other algorithms, for these algorithms are computationally too challenging (they take too much time to run). Here is an illustrative example. We compare P-FWS to a simplified version of CombGame [36]. The experiments are performed on a Macbook (Apple M1) with 8 cores and 16GB memory. We made 100 runs to compute the expected sample complexity of the algorithms. * On a graph with $K=20$ edges and a decision set consisting of $|X|=21025$ spanning trees and confidence $\delta=0.1$, P-FWS has an expected sample complexity $\tau$ of 1139 samples while CombGame with OFW has an expected sample complexity of 1325 samples but already takes 21 minutes for each run. * Now on a graph of $K=25$ edges and $|X|=0.3$ million spanning trees and confidence $\delta=0.1$, P-FWS's expected sample complexity is 2000 samples (on average, each run takes 17 hours) while any run for CombGame fails to finish within 2 days. Note that the original CombGame algorithm could not be implemented – it takes way too much time to finish. We had to simplify it. More precisely, we use our MCP subroutine to perform two components of CombGame, namely the best-response player and the stopping rule. Even with this simplification, CombGame rapidly becomes computationally intractable as the size of the decision set increases. About your comment on problems where the LM oracle cannot be implemented in polynomial time. Thanks for this remark and the reference. We will add and discuss it!
This issue remains beyond the scope of our paper. It would be interesting to investigate whether the techniques of (Yang, Feidiao, et al.) could be used. We will mention this in the conclusion. Reference: * (Yang, Feidiao, et al.) "Follow the perturbed approximate leader for solving semi-bandit combinatorial optimization." Frontiers of Computer Science 15 (2021) --- Rebuttal Comment 1.1: Comment: I have read the other reviews and the rebuttal. I am satisfied with the answer provided by the authors. I will not change my score.
Summary: This manuscript studies the asymptotically optimal sample complexity for best arm identification in stochastic combinatorial semi-bandits, under the fixed confidence setting. The main contribution is the introduction of a computationally efficient algorithm which achieves the asymptotically optimal sample complexity in the high-confidence regime, confirming the conjecture of [Jourdan et al. 2021] on the nonexistence of a statistical-computational gap. The proposed algorithm relies on the change-of-measure lower bound (Eqn. (1)) in [Garivier and Kaufmann, 2016], with two main ingredients: 1. To solve the inner minimization problem, using the Lagrangian form, the authors formulate it as a two-player zero-sum game. The minimizing player (choosing a super arm) uses an FTPL algorithm which is oracle-efficient based on a linear maximization (LM) oracle; the maximizing player (choosing the Lagrangian parameter) plays the best response. Standard results from online learning show that the average plays converge to the minimax solution. Significance: medium. 2. To solve the outer maximization problem, the authors use a perturbed Frank-Wolfe sampling algorithm. First, the objective function is smoothed by a Gaussian kernel to overcome the issue of multiple gradients. Second, the learner computes the estimated gradient for the smoothed objective and plays the best response to the estimated gradient. A stopping rule, as well as forced exploration rules, is also applied in the algorithm. Significance: medium. Strengths: This manuscript proved the nonexistence of the statistical-computational gap in stochastic combinatorial semi-bandits and resolved a conjecture of [Jourdan et al. 2021]. The components in the algorithm design, i.e. the two-player games and the perturbed Frank-Wolfe sampling, are both interesting. Weaknesses: Overall I like this paper and lean towards acceptance. However, I do have some concerns on the significance of the results: 1.
The problem under study, i.e. a computationally efficient algorithm for best arm identification in stochastic combinatorial semi-bandits under the fixed confidence setting, is very specific and a bit narrow in scope. The proposed algorithm is very tailored to the current setting, and the results are asymptotic. The motivation of this manuscript seems to be mainly technical curiosity, where the proposed algorithm is too complicated and very unlikely to be used by practitioners. 2. The nonexistence of a statistical-computational gap is not very surprising, which slightly lowers the significance of the result. Also, the attained sample complexity is asymptotically optimal only in the high confidence regime and also involves a parameter polynomial in $1/\Delta_{\min}$. 3. The two-player zero-sum game approach for an oracle-efficient algorithm is nowadays pretty standard, especially when one player is playing FTPL and the other is using best response. So the significance of the first component is medium. 4. The Frank-Wolfe sampling approach, albeit different from [Wang et al. 2021], is also pretty natural. The same also applies to the smoothing technique. The complicated notations and many seemingly arbitrary rules in the algorithm (more on the writing below) make it considerably harder to appreciate the main idea, and also degrade the significance. In addition, I would like to point out that the writing quality of this manuscript is pretty poor. Just to list several examples: 1. The LM oracle is not even formally defined in the main text. 2. Algorithm 2 is poorly explained. Why forced exploration when $\sqrt{t/|X_0|} \in \mathbb{N}$? Why does the best response to the estimated gradient correspond to a good decision? The definition of $\nabla F_{\mu, \eta, n}$ is deferred to a much later place. What is the intuition behind the stopping rule? What is the beta function in the stopping rule? Many components require explanations. 3. Small places: "would" in Line 220, "in" in Line 257.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The remainder term in Theorem 2 depends on $\Delta_{\min}$, which is pretty unfortunate, as this quantity could be small in combinatorial bandits. I am wondering if one can argue that this term is always negligible compared with the main term? More specifically, when $\Delta_{\min}$ is small, the optimal sample complexity $T^\star$ would also be large; I'm wondering if $T^\star$ could always dominate the $\mathrm{poly}(1/\Delta_{\min})$ term in the remainder. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
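For background on the Gaussian smoothing step mentioned in the summary above, the generic randomized-smoothing construction (a standard pattern; the paper's exact kernel and scaling may differ) replaces a possibly nonsmooth objective $F$ by

```latex
F_\eta(\mu) = \mathbb{E}_{Z \sim \mathcal{N}(0, I_K)}\!\left[ F(\mu + \eta Z) \right],
\qquad
\nabla F_\eta(\mu) = \frac{1}{\eta}\,\mathbb{E}_{Z \sim \mathcal{N}(0, I_K)}\!\left[ F(\mu + \eta Z)\, Z \right].
```

The smoothed function $F_\eta$ is differentiable everywhere, even at points where $F$ (a minimum of smooth functions) has multiple subgradients, its gradient can be estimated from function evaluations alone, and $F_\eta \to F$ uniformly as $\eta \to 0$ when $F$ is Lipschitz.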
Rebuttal 1: Rebuttal: Many thanks for your careful review and feedback. About the paper scope. We believe that combinatorial bandits with semi-bandit feedback constitute one of the classical problems in the bandit literature. They find numerous applications, see [11, 12, 14, 16, 23, 36, 43, 50] (references are those from the supplementary material). Refer to the answer to reviewer Vkf2 for details. We will cite a few of these applications in the final version of the paper. About: > the results are asymptotic. Our primary objective was to derive an algorithm that is statistically optimal in the high confidence regime (when $\delta$ goes to 0 – so indeed with asymptotic guarantees) and that runs in polynomial time. We achieved this objective, but our algorithm also exhibits non-asymptotic guarantees. It has a sample complexity polynomial in $K$ in the moderate confidence regime, see Theorem 4 (when $\delta$ does not necessarily tend to 0). Note that no algorithm with both (i) minimal sample complexity in the high confidence regime and (ii) polynomial sample complexity in the moderate confidence regime was known even in the vanilla MAB problem (in the recent paper [6], the authors did not manage to get these guarantees). P-FWS enjoys these guarantees in a setting that is far more challenging than what [6] (or [40]) considers. About: > The nonexistence of statistical-computational gap is not very surprising. We would like to emphasize that our result is new even for the vanilla MAB problem: as mentioned above, there is no existing algorithm whose sample complexity is both asymptotically optimal and polynomial in $K$ in the moderate regime. In addition, for combinatorial semi-bandits, we are the first to show that the optimization problem leading to the problem-specific sample complexity lower bound can be solved in polynomial time. The level of complexity of this problem was so far unknown.
Based on this result, we can devise an algorithm, P-FWS, that tracks the lower bound in a computationally efficient way. About: > The proposed algorithm is too complicated. Based on existing Julia implementations [21, 57], we implemented the sampling rule of P-FWS within 50 lines of code (this amount of code is comparable to what other sampling rules take), and used 100 lines of code to implement the MCP algorithm. We did not present this implementation in the paper because we found it difficult to compare the performance of our algorithm to that of other algorithms. The latter algorithms are computationally too challenging (they take too much time to run). Here is an illustrative example. We compare P-FWS to a simplified version of CombGame [36]. The experiments are performed on a Macbook (Apple M1) with 8 cores and 16GB memory. We made $100$ runs to compute the expected sample complexity of the algorithms. * On a graph with $K=20$ edges and a decision set consisting of $|X|=21025$ spanning trees and confidence $\delta=0.1$, P-FWS has an expected sample complexity $\tau$ of 1139 samples while CombGame with OFW has an expected sample complexity of 1325 samples but already takes 21 minutes for each run. * Now on a graph of $K=25$ edges and $|X|=0.3$ million spanning trees and confidence $\delta=0.1$, P-FWS's expected sample complexity is 2000 samples (on average, each run takes 17 hours) while any run for CombGame fails to finish within 2 days. Note that the original CombGame algorithm had to be simplified. More precisely, we use our MCP subroutine to perform two components of CombGame, namely the best-response player and the stopping rule. Even with this simplification, CombGame rapidly becomes computationally intractable as the size of the decision set increases.
About: > the attained sample complexity also involves a parameter polynomial in $1/\Delta_{\min}$ The dependence of the sample complexity on $1/\Delta_{\min}$ is unavoidable: it is easy to derive from Proposition 1 (c) that $T^*(\mu)$ is at least $1/\Delta_{\min}^2$. About: > The two-player zero-sum game approach for an oracle-efficient algorithm is nowadays pretty standard. Our contributions consist in (i) establishing the property that allows one to relate Eq (3) to a two-player zero-sum game and (ii) designing an algorithm that not only converges to the equilibrium but also returns an equilibrium action, even in a setting where one of the domains (the action set of one player) is not convex. About (i), prior works [12, 57] apply the Lagrange multiplier method to derive the closed form of $f_x(\omega,\mu)$, but as far as we know, no one noticed the linear-concave property of the Lagrangian dual function as we showed in Proposition 1. This property opens the opportunity to relate Eq (3) to a two-player zero-sum game. Now achieving (ii) is non-trivial. In the standard setting [2, 19, 52], the domains for both players are convex, and the objective is just to converge to the equilibrium with low cumulative regret. In our case, for MCP, one of the domains (that of the $x$-player) is combinatorial. Moreover, we wish to return the equilibrium action (which is more difficult than just ensuring low cumulative regret) – this type of convergence is referred to as last-iterate convergence in the literature, see e.g. [20] and other references cited in Section 3.2. About: > The Frank-Wolfe sampling approach is also pretty natural. The Frank-Wolfe approach of [57] cannot be applied here because it would lead to a computationally infeasible algorithm as we explain in the paper. Hence, we had to use and analyze a different smoothing technique. About: > What is the intuition behind the stopping rule? What is the beta function in the stopping rule?
The stopping rule basically compares $F_{\hat{\mu}(t-1)}(\omega(t))$ (which represents the distance of the current estimate to the closest bandit model with a different best action) to a threshold function $\beta$. Please refer to [29, 42] for details. See lines 291-292 for possible choices of the threshold function $\beta$. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed comment. Some follow-up questions: 1. $1/\Delta_{\min}$ dependence: I know that $T^\star(\mu)$ is at least $1/\Delta_{\min}^2$; what I was asking in my question is whether you can show the $\text{poly}(1/\Delta_{\min})$ you derived is always no larger than $T^\star(\mu)$. I mean, if $T^\star(\mu)$ is only larger than $1/\Delta_{\min}^2$ but your $\text{poly}(1/\Delta_{\min})$ term is $1/\Delta_{\min}^{10}$, that's still very unfortunate. 2. Regarding your point (ii) in the two-player zero-sum approach: please specify if your proof of why your $x_e$ works only requires a low cumulative regret (and possibly thanks to the special problem structure in your Lagrangian), or you need to prove the last-iterate convergence. Right now only the cumulative regret is shown in Lemma 3, and I don't know if you proved a hard last-iterate convergence in Theorem 3, or you essentially applied a smart one-line trick to bypass it. --- Reply to Comment 1.1.1: Comment: Thanks for your questions! **About $1/\Delta_{\min}$ dependence.** Looking at (32) in the appendix, the dependence of the term $\Psi(\epsilon,\tilde{\epsilon})$ involved in the moderate confidence regime scales well with $\Delta_{\min}$ (i.e., $\Delta_{\min}^{-2.5}$). But in (32), the trade-off between the term for the high confidence regime (roughly $T^\star(\mu)^{-1}\log(1/\delta)$) and the term $\Psi(\epsilon,\tilde{\epsilon})$ involves selecting $\epsilon$ w.r.t. $\Delta_{\min}$. Hence, at the end, the dependence of $\Psi(\epsilon,\tilde{\epsilon})$ on $1/\Delta_{\min}$ could be a polynomial of relatively high order.
We can probably achieve a better dependence on $\Delta_{\min}$ in the moderate confidence regime. We haven’t tried, and we leave this question for future work. Indeed, our main goal in the paper was to devise an algorithm that is optimal in the high confidence regime and that runs in polynomial time. We achieved this goal, and our guarantees in the moderate confidence regime came as a “bonus”. For the moderate confidence regime, we wanted to focus on the dependence on $K$ (the number of basic arms), and we managed to achieve a sample complexity in this regime only polynomial in $K$ (as explained in Appendix B, lines 571-572, naïve methods, e.g. applying the algorithm of [57], would lead to a sample complexity growing exponentially with $K$). We would like to finally note that the expected sample complexity in the moderate confidence regime and its dependence on $\Delta_{\min}$ are not known, even in the vanilla MAB problem. In our setting, an additional difficulty stems from the fact that we need a computationally efficient algorithm; refer to Appendix B for a detailed discussion. **About “two-player zero-sum approach”**. Our approach only requires a low cumulative regret, as explained in lines 208-212. Our approximation $\hat{F}$ converges to the desired minimax value $F_{\mu}(\omega)$ thanks to the following three inequalities: 1. (Lemma 3): $\frac{1}{N}\sum_{n=1}^N g_{\omega,\mu}(x^{(n)},\alpha^{(n)})-\frac{1}{N}\min_{x\neq i^\star}\sum_{n=1}^Ng_{\omega,\mu}(x,\alpha^{(n)}) \le \frac{c_\theta}{\sqrt{N}}$ with probability at least $1-\theta$. 2. (line 208): $\frac{1}{N}\sum_{n=1}^N g_{\omega,\mu}(x^{(n)},\alpha^{(n)})\ge \hat{F}$. 3. (line 209): $\frac{1}{N}\min_{x\neq i^\star}\sum_{n=1}^Ng_{\omega,\mu}(x,\alpha^{(n)}) \le F_{\mu}(\omega)$. As a result, $\hat{F}- F_{\mu}(\omega)\le \frac{c_\theta}{\sqrt{N}}$ with probability at least $1-\theta$.
We would like to emphasize that the above approach, from low cumulative regret to the returned iterate, is specific to the requirements of our problem setting, where the approximate equilibrium point $\hat{x}$ has to be an *action* rather than an *average of actions*. This also makes our two-player zero-sum approach less standard than in other works.
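For reference, the claim in this thread that $T^\star(\mu)$ is at least of order $1/\Delta_{\min}^2$ admits a short change-of-measure sketch. The version below assumes unit-variance Gaussian rewards and is a generic reconstruction, not necessarily the argument behind Proposition 1 (c):

```latex
% Lower-bound characterization (Gaussian case):
T^\star(\mu)^{-1}
  = \sup_{\omega \in \Delta_K}\; \inf_{\lambda \in \operatorname{Alt}(\mu)}\;
    \sum_{k=1}^{K} \omega_k \,\frac{(\mu_k - \lambda_k)^2}{2}.
% Let x be the closest competitor of the best action x^\star, let
% S = x \,\triangle\, x^\star with |S| = s, and let \lambda move every coordinate
% of \mu in S by \Delta_{\min}/s towards flipping the comparison between x and
% x^\star. Then, for every \omega in the simplex,
\inf_{\lambda \in \operatorname{Alt}(\mu)}
  \sum_{k} \omega_k \,\frac{(\mu_k - \lambda_k)^2}{2}
  \;\le\; \frac{1}{2}\Big(\frac{\Delta_{\min}}{s}\Big)^{2} \sum_{k \in S} \omega_k
  \;\le\; \frac{\Delta_{\min}^2}{2},
% hence T^\star(\mu) \ge 2/\Delta_{\min}^2.
```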
Summary: The paper is about best arm identification with fixed confidence, in combinatorial semi-bandits. The state of the art in that setting is that we have algorithms with optimal asymptotic sample complexity, but their computational complexity is very large. The authors provide a new algorithm which remedies this problem. The algorithm is asymptotically optimal and has polynomial computational complexity (in the number of arms and parameters of the problem like the minimal gap). The main contribution is a method to compute efficiently the closest alternative to an instance of the combinatorial semi-bandit problem, which is a sub-routine in several BAI algorithms. That method is incorporated into a BAI algorithm based on Frank-Wolfe. Strengths: The algorithm presented is the first to have polynomial computational complexity in combinatorial BAI. This is a significant achievement towards having practical algorithms for that important problem. The sample complexity is also asymptotically optimal. The new method MCP is the result of an interesting reformulation of the problem and a thorough analysis of its properties. Its integration in a Frank-Wolfe algorithm also uses innovative methods compared to the existing FW algorithm for BAI. The algorithms and the results are clearly explained. Weaknesses: The dependence on $K$, $D$ and other constants is indeed polynomial, but with very large exponents. Some bounds depend on $K^{15}$. Since this work presents the first algorithm with polynomial complexity, this is only a mild weakness. The main weakness is that there is no experimental evaluation. - Is the method still very inefficient in practice? Can it actually run on a computer? - Some components of the algorithm seem to be included to make the analysis work, like the comparison of $\sqrt{t}$ with the norm of the empirical mean vector. A natural question is whether they are necessary in practice, or if they are artifacts of the analysis.
This could have been studied empirically. - The paper does not only provide a computationally efficient subroutine for a known algorithm; it also proposes a new BAI algorithm based on Frank-Wolfe (another FW-based method already exists, but it is not exactly the same). The sample complexity of the new method should then also be evaluated. Perhaps an innocuous-looking modification made it very bad in practice, although the asymptotic guarantees are good? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you provide an empirical analysis of the proposed method and compare it to other algorithms? Both for sample and computational complexity. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations of the paper are adequately discussed. No concern about societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your careful review and positive feedback. About the comment: > Some components of the algorithm seem to be included to make the analysis work, like the comparison of \sqrt{t} with the norm of the empirical mean vector. We agree that, naturally, some of the components of our algorithm, like the comparison to $\sqrt{t}$, have been precisely designed so that we obtain the desired performance guarantees. Now it is indeed interesting to see whether, in practice, the algorithm would still work with these components toned down. We leave this sensitivity analysis for future work. About the absence of experiments: in this paper, we wanted to highlight its strong theoretical contributions: for combinatorial bandits, our algorithm P-FWS is the first polynomial-time algorithm that is statistically optimal in the high confidence regime. In addition, it has a sample complexity polynomial in $K$ in the moderate confidence regime. We implemented P-FWS, but we found it difficult to compare its performance to that of other algorithms, because those algorithms are computationally too expensive (they take too much time to run). Here is an illustrative example. We compare P-FWS to a simplified version of CombGame [36]. The experiments are performed on a Macbook (Apple M1) with 8 cores and 16GB memory. We made 100 runs to compute the expected sample complexity of the algorithms: * On a graph with $K=20$ edges and a decision set consisting of $|X|=21025$ spanning trees and confidence $\delta=0.1$, P-FWS has an expected sample complexity $\tau$ of $1139$ samples, while CombGame with OFW has an expected sample complexity of 1325 samples but already takes 21 minutes per run. * Now on a graph with $K=25$ edges, $|X|=0.3$ million spanning trees, and confidence $\delta=0.1$, the expected sample complexity of P-FWS is 2000 samples (and on average, each run takes 17 hours), while no run of CombGame finishes within 2 days.
Note that the original CombGame algorithm could not be implemented – it takes way too much time to finish. We had to simplify it. More precisely, we use our MCP subroutine to perform two components of CombGame, namely the best-response player and the stopping rule. Even with this simplification, CombGame rapidly becomes computationally intractable as the size of the decision set increases.
Summary: The paper introduces a computationally efficient algorithm, P-FWS (Perturbed Frank-Wolfe Sampling), designed for best arm identification in Combinatorial semi-bandits. This algorithm operates in polynomial time and offers minimal sample complexity guarantees or polynomial sample complexity, depending on the problem regime. P-FWS is an online estimation algorithm that aims to achieve the information-theoretic lower bound at each stage. It leverages the Frank-Wolfe optimization routine as its core component, relying on linear maximization at each step. Overall, the paper presents an innovative and efficient approach to address the best arm identification problem in Combinatorial semi-bandits. Strengths: One strength of the paper lies in its exploration of a novel and important problem within the bandit literature. The utilization of Perturbed Frank-Wolfe as the central algorithm for estimating the lower bound is a unique and valuable contribution that has the potential to complement and enhance existing works in the field. The paper showcases strong theoretical rigor by providing detailed proofs for all claims, ensuring the reliability and robustness of the presented findings. Overall, the paper's innovative problem formulation and theoretical foundation make it a valuable addition to the broader bandit literature. Weaknesses: One weakness of the paper is its challenging readability and navigation. The excessive use of definitions, constants, and interruptions in the flow of the paper makes it difficult to understand and follow. This overload of information can overwhelm the reader and hinder comprehension. Furthermore, the practical motivation and real-world use cases of the proposed algorithms are discussed in a limited scope. It would be beneficial to expand on the potential applications and provide more examples of how such algorithms can be applied in real-world scenarios. Additionally, the absence of simulations in the main paper is notable. 
Although the paper highlights the computational efficiency of the proposed algorithms, there is a lack of performance comparison with other non-efficient methods. Including such comparisons would provide valuable insights into the comparative advantages and disadvantages of the proposed approach. Overall, addressing these weaknesses would enhance the clarity, applicability, and overall impact of the paper. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Would this paper have any significance or relevance to the part of the bandit literature that focuses on tracking the lower bound and sampling based on such lower bound estimations, which are often computationally infeasible? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: No potential negative societal impact discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your careful review and feedback. About the presentation and readability, thanks for your feedback! We had chosen to write a long introduction to summarize all the ideas and contributions of the paper, and to help the reader understand the paper structure. This “unconventional” presentation may have been confusing. We will revert to a more classical and simplified structure. Regarding the practical motivation, we did not mention many of them because combinatorial bandits have been extensively studied and applied in the past. In particular, applications of combinatorial semi-bandits [11, 12, 14, 16, 23, 36, 43, 50] (references are those from the supplementary material) include: * The online ranking problem [23] of identifying the $m$ most relevant (out of a total of $K$) items can be modeled in our setting with $m$-sets or $m$-permutations as the decision set. * The routing problem [14, 43] of finding the routing tree with the lowest expected latency corresponds to our case with the set of all possible spanning trees in an ISP (Internet Service Provider) network as the decision set. * The loan-assignment problem [43], which aims to identify the matching between the lending institute and the lenders such that the expected paid rate is the highest, can be formulated by setting the decision set as the set of all possible perfect matchings in a bipartite graph. * The path planning problem [36] of finding the path from an origin $s$ to a destination $t$ with minimal expected traveling time can be modeled by using the set of all possible $s$-$t$ paths in the given directed acyclic graph as the decision set. We will cite a few of these applications in the final version of the paper. About your question: > Would this paper have any significance or relevance to the part of the bandit literature that focuses on tracking the lower bound and sampling based on such lower bound estimations, which are often computationally infeasible? 
Indeed, this is exactly what we are addressing in this paper. For the case of combinatorial bandits, the problem-specific lower bound involves an intricate optimization problem whose level of complexity was so far unknown. We prove that we can solve this problem in polynomial time (this is the first result of this kind). As a consequence, we are able to devise an algorithm that tracks the lower bound in a computationally efficient way. Our algorithm is statistically optimal and runs in polynomial time, and it is the first algorithm to combine these two properties! We would like to add that our algorithm also exhibits a sample complexity polynomial in $K$ in the moderate confidence regime (when $\delta$ does not necessarily tend to 0). Note that no algorithm with both (i) minimal sample complexity in the high confidence regime and (ii) polynomial sample complexity in the moderate confidence regime was known even in the vanilla MAB problem (in the recent paper [6], the authors did not manage to get these guarantees). P-FWS enjoys these guarantees in a setting that is far more challenging than what [6] (or [40]) considers. About the absence of experiments: in this paper, we wanted to highlight its strong theoretical contributions, as listed in the previous paragraph. We implemented our algorithm P-FWS, but we found it difficult to compare its performance to that of other algorithms, because those algorithms are computationally too expensive (they take too much time to run). Here is an illustrative example. We compare P-FWS to a simplified version of CombGame [36]. The experiments are performed on a Macbook (Apple M1) with 8 cores and 16GB memory.
We made 100 runs to compute the expected sample complexity of the algorithms: * On a graph with $K=20$ edges and a decision set consisting of $|X|=21025$ spanning trees and confidence $\delta=0.1$, P-FWS has an expected sample complexity $\tau$ of 1139 samples, while CombGame with OFW has an expected sample complexity of 1325 samples but already takes 21 minutes per run. * Now on a graph with $K=25$ edges, $|X|=0.3$ million spanning trees, and confidence $\delta=0.1$, the expected sample complexity of P-FWS is 2000 samples (and on average, each run takes 17 hours), while no run of CombGame finishes within 2 days. Note that the original CombGame algorithm could not be implemented – it takes way too much time to finish. We had to simplify it. More precisely, we use our MCP subroutine to perform two components of CombGame, namely the best-response player and the stopping rule. Even with this simplification, CombGame rapidly becomes computationally intractable as the size of the decision set increases. --- Rebuttal Comment 1.1: Comment: Dear reviewer, we were wondering whether you read our rebuttal and whether you find that it clarified our contributions. Please let us know! We would be happy to answer further questions, if any (the discussion phase ends August 21). Thank you. Best wishes.
NeurIPS_2023_submissions_huggingface
2023
A Riemannian Exponential Augmented Lagrangian Method for Computing the Projection Robust Wasserstein Distance
Accept (poster)
Summary: This paper considered the problem of computing the projection robust Wasserstein (PRW) distance between two discrete probability measures. By formulating this problem as an optimization problem over the product space of the Stiefel manifold and the Euclidean space with additional nonlinear inequality constraints, the authors proposed the so-called Riemannian exponential augmented Lagrangian method (REALM) to solve it. Further, the convergence of REALM was given. For solving the subproblems in REALM, the authors designed the so-called inexact Riemannian Barzilai-Borwein method with Sinkhorn iteration (iRBBS), where stepsizes are adaptively chosen. As the authors claimed, the complexity of iRBBS to attain an $\epsilon$-stationary point of the original PRW distance problem matches the best known iteration complexity result. Strengths: Clearly, this paper generalized some results of the references [26,32]. To the reviewer's best understanding, the core idea is to find feasible points that satisfy the first-order necessary conditions of problem (6) or (11). To solve the three-block optimization problem, the paper reformulated it as alternately minimizing over the Stiefel manifold variable $U$ and the Euclidean variables $\alpha$ and $\beta$, where the latter can be solved by well-established inexact gradient methods. Overall, this paper is well-written and mathematically solid. Weaknesses: (1) Some mathematical details/arguments are missing (please point out if they are added in the supplementary material). For example, Line 115: why must the minimizer $(x^*,y^*)$ of problem (9) satisfy the relationship $y^*=\eta\log(\Vert\zeta_\eta(x^*,11^T)\Vert_1)$? Line 124: how is the gradient of $\mathcal{L}_{\eta_k}(x,\pi^k)$ calculated? (2) It seems that the update of $\theta_{t}$ in Algorithm 2 is missing, because $\theta_{t+1}$ appears in the inexactness criterion (19b). Please correct me if not.
(3) The paper may require some ablation studies in the numerical experiments. The authors claimed REALM always outperforms the Riemannian exponential penalty approach since it could avoid too small penalty parameters in many cases (Lines 70-71). What's the range for ``too small" penalty parameters? And which cases? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Line 65: One ``method" in this line is redundant. 2. Line 91: There is no definition for $\nabla f(U)$. The metrics on the left and on the right side are different. 3. Line 145, Proposition 2.6: How to come out with the $x^s$? What is the motivation for the definition of $x^s$? 4. Line 252: Should be $\theta_t$ in the place of $\theta$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your positive comments on the basic results achieved in the paper and your helpful comments and suggestions. The following are our point-to-point responses to your comments. - Response to Weakness 1. Thank you for your comments. We acknowledge that, due to the page limit, some mathematical details/arguments are missing in the paper. We shall add them either in the revised paper or in the supplementary material. Below let us first provide the detailed derivations for the two examples you mentioned in your above comments. - Due to the optimality of $(\\mathbf{x}^*, y^*)$, we have $\nabla\_{y} \widetilde{\mathcal{L}}\_{\eta}(\mathbf{x}^*, y^*, 11^T) =0.$ Recalling that $\widetilde{\mathcal{L}}\_{\eta}(\mathbf{x}^*, y^*, 11^T) = r^T \alpha + c^T \beta + y + \eta \sum_{ij} \exp\left (- \frac{\varphi(\mathbf{x})\_{ij} + y}{\eta} \right),$ we obtain $\nabla\_{y}\widetilde{\mathcal{L}}\_{\eta}(\mathbf{x}^*, y^*, 11^T) = 1 - \sum\_{ij} \exp\left (- \frac{\varphi(\mathbf{x^*})\_{ij} + y^*}{\eta} \right) = 0$, which gives $y^* = \eta \log \sum\_{ij} \exp\left (-\frac{\varphi(\mathbf{x^*})\_{ij}}{\eta} \right) = \eta \log (\\|\zeta\_{\eta}(x^*, 11^T)\\|\_1).$ - By the expression \\[\mathcal{L}\_{\eta}(\mathbf{x}, \pi) = r^T \alpha + c^T \beta + \eta \log \sum\_{ij} \pi\_{ij} \exp\left( -\frac{\alpha\_i + \beta\_j + \langle M\_{ij}, UU^T\rangle} {\eta}\right) \\] and employing the chain rule, we can compute the gradient of $\mathcal{L}\_{\eta_k}(\mathbf{x}, \pi^k)$ with respect to $U$ as \\[\nabla\_U \mathcal{L}\_{\eta\_k}(\mathbf{x}, \pi^k) = - 2 V\_{\phi\_{\eta\_k}(\mathbf{x}, \pi^k)} U\\] with $V\_{\phi_{\eta\_k}(\mathbf{x}, \pi^k)} = \sum\_{ij} [\phi\_{\eta\_k}(\mathbf{x}, \pi^k)]\_{ij} M\_{ij}$ (and $V_{\pi}$ for a given matrix $\pi$ is defined in Line 85 of the paper). Moreover, the gradient of $\mathcal{L}_{\eta_k}(\mathbf{x}, \pi^k)$ with respect to $\alpha$ and $\beta$ can be found in Lines 124 and 125, respectively. 
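As a quick numerical sanity check of the closed form derived above, the sketch below (a toy illustration, not the paper's code; the values of $\eta$ and the matrix standing in for $\varphi(\mathbf{x})$ are assumptions) minimizes the smoothed objective over the scalar $y$ and compares the minimizer with $y^* = \eta \log \sum_{ij} \exp(-\varphi_{ij}/\eta)$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
eta = 0.5
phi = rng.uniform(0.0, 2.0, size=(4, 4))  # toy stand-in for varphi(x)_{ij}

# Closed form from the stationarity condition in y:
# y* = eta * log sum_ij exp(-phi_ij / eta)
y_star = eta * np.log(np.exp(-phi / eta).sum())

# The terms r^T alpha + c^T beta do not depend on y, so minimizing the
# smoothed Lagrangian over y reduces to minimizing this convex scalar function.
def g(y):
    return y + eta * np.exp(-(phi + y) / eta).sum()

res = minimize_scalar(g, bounds=(-10.0, 10.0), method="bounded")
assert np.isclose(res.x, y_star, atol=1e-4)

# At y*, the exponential weights sum to 1, matching the stationarity condition.
assert np.isclose(np.exp(-(phi + y_star) / eta).sum(), 1.0)
```

Since $g''(y) = \frac{1}{\eta}\sum_{ij}\exp(-(\varphi_{ij}+y)/\eta) > 0$, the function is strictly convex and the stationary point is the unique minimizer, which is what the check confirms.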
- Response to Weakness 2. Thanks for this important comment. Theoretically, Algorithm 2 allows $\theta_t$ to be any nonnegative number, which is mainly due to the nice property of the Sinkhorn iteration shown in Lemmas B.1 and B.2. Please see also Theorem B.5 for the convergence result of Algorithm 2. However, it would be better to clearly specify the choice/update of $\theta_t$ in Algorithm 2, as you suggested. In the revision, we shall add the update of $\theta_t$, namely, $$\theta_{t+1} = \max\\{\theta/(2\\|C\\|_\infty) \\|\xi^t\\|_F, \epsilon_2\\},$$ in Line 6 of Algorithm 2 and include the phrase ``Set $\theta_0 = 1$ and choose $\theta > 0$'' in its input part. This response also relates to your Question 4 below. - Response to Weakness 3. Thanks for your important comment. There have already been systematic studies in the literature on the advantage of the exponential ALM over the exponential penalty approach in terms of better numerical stability, due to its ability to avoid overly small penalty parameters; please check Refs. [18], [45] and [50] in the paper for your reference. However, it is hard to give an explicit estimate of the range of ``too small'' values. Based on our numerical experience, we have found that when $\eta_k\leq 1/500\max\\{||r||\_\infty,||c||\_\infty\\}$ or $\eta_k \leq 1/900 \\|C(U^t) - \eta_k \log \pi^k\\|_{\mathrm{var}}$ (as shown in Line 283 and Line 284, respectively), the penalty parameter would be considered too small (when REALM is used to solve the PRW distance problem). In such cases, we recommend performing the Sinkhorn iteration with log-domain stabilization, as shown in Eq. (21) and Line 301. - Response to Question 1. Thanks. We shall correct this typo in the revision. - Response to Question 2. Thank you for bringing this to our attention. You are correct, and we apologize for the oversight in not making this point clear. The notation $\nabla f(U)$ means the Euclidean gradient of $f$ with respect to $U$.
Usually, the metrics on the two sides of $\langle \mathrm{grad} f(U), \xi \rangle = \langle \nabla f(U), \xi \rangle$ are different: the left-hand side should be $\langle \cdot, \cdot \rangle_U$, which is a smoothly varying inner product on the tangent space at $U$. Since the Stiefel manifold is an embedded manifold of a Euclidean space $\mathcal{E}$, we can simply choose $\langle \cdot, \cdot \rangle_U$ as the standard inner product in $\mathcal{E}$, which coincides with the metric on the right-hand side. We will clarify this issue in the revision. - Response to Question 3. Thanks for this important comment. It comes from the observation that \begin{align} &r^T (\alpha + \upsilon_1 \mathbf{1}) + c^T (\beta + \upsilon_2 \mathbf{1}) + \eta \log \sum_{ij} \pi_{ij} \exp\left( -\frac{(\alpha_i + \upsilon_1) + (\beta_j + \upsilon_2) + \langle M_{ij}, UU^T\rangle} {\eta}\right) \\\\ ={}& r^T \alpha + c^T \beta + \eta \log \sum_{ij} \pi_{ij} \exp\left( -\frac{\alpha_i + \beta_j + \langle M_{ij}, UU^T\rangle} {\eta}\right), \end{align} where the equality uses the fact that $r^T \mathbf{1} = c^T \mathbf{1} = 1$. Hence, by carefully choosing $\upsilon_1$ and $\upsilon_2$, we can derive $\mathbf{x}^s$ and the corresponding results in Proposition 2.6. Moreover, the first two requirements in Eq. (14) are essential to establish the boundedness of $\alpha^k$ and $\beta^k$, which in turn ensures the convergence of REALM. This is the main motivation behind defining such $\mathbf{x}^s$. - Response to Question 4. Thank you for your comment. The parameter $\theta$ is a pre-selected constant in the proposed iRBBS algorithm. By choosing different values of $\theta$, we obtain different versions of iRBBS, denoted as iRBBS-$\theta$. Usually, a smaller $\theta$ means that the subproblem is solved more accurately. We will clarify the meaning of $\theta$ in Line 252 to avoid any potential confusion. This clarification will be included in the revision.
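The translation-invariance identity in Response to Question 3 is easy to check numerically. The sketch below uses toy data (all values are assumptions; a random matrix stands in for $\langle M_{ij}, UU^T\rangle$) and verifies that shifting $\alpha$ by $\upsilon_1\mathbf{1}$ and $\beta$ by $\upsilon_2\mathbf{1}$ leaves the objective unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
eta = 0.3

# Toy data: probability vectors r, c and a random cost matrix standing
# in for <M_ij, U U^T> (assumptions, not the paper's data).
r = rng.random(n); r /= r.sum()
c = rng.random(n); c /= c.sum()
cost = rng.uniform(0.0, 1.0, size=(n, n))
pi = np.full((n, n), 1.0 / n**2)  # any positive weight matrix works
alpha = rng.normal(size=n)
beta = rng.normal(size=n)

def L(a, b):
    # r^T a + c^T b + eta * log sum_ij pi_ij exp(-(a_i + b_j + cost_ij)/eta)
    expo = -(a[:, None] + b[None, :] + cost) / eta
    return r @ a + c @ b + eta * np.log((pi * np.exp(expo)).sum())

# Shifting alpha by v1*1 and beta by v2*1 leaves the value unchanged:
# r^T 1 = c^T 1 = 1 adds v1 + v2, while the factor exp(-(v1+v2)/eta)
# pulled out of the log-sum-exp subtracts it again.
v1, v2 = 0.7, -1.3
assert np.isclose(L(alpha, beta), L(alpha + v1, beta + v2))
```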
--- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I have now understood your detailed explanations. I would like to keep my recommendation.
Summary: This paper reformulates the projection robust Wasserstein (PRW) distance as an optimization problem over the product of the Stiefel manifold and a subset of a Euclidean space. A Riemannian exponential augmented Lagrangian method (REALM) is proposed to solve this problem. The proposed method is empirically more stable than the existing Riemannian exponential penalty-based approach. A Riemannian BB method with Sinkhorn iteration (iRBBS) is used for the subproblem. The iteration complexity of iRBBS is given. Numerically, the proposed method outperforms the existing methods. Strengths: (1) A method (REALM) is proposed and its global convergence is given. (2) An inexact Riemannian BB method with Sinkhorn is developed and its iteration complexity is derived. Such a result is interesting by itself. (3) From the numerical comparison, the proposed method is the most efficient. Weaknesses: The existing methods have iteration complexities for the corresponding algorithms. This paper only gives the iteration complexity for the subproblem. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) Can the authors give the iteration complexity for the overall algorithm? What is the main difficulty here? (2) What is the dominant computational cost in the proposed algorithm and the compared algorithms? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors pointed out an important limitation in the theoretical analysis, which is the lack of a lower bound on \eta_k. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Response to Question 1. Many thanks for this insightful comment. Regarding the current overall Algorithm 1, we were not able to establish the iteration complexity due to the following two main difficulties: (i) characterizing the connection between the two complementarity measures $\\|W^k\\|\_F$ and $\langle \pi^k, Z(\mathbf{x}^k)\rangle$ at the approximate stationary point of the subproblem; and (ii) establishing the relationship between $\eta_k \tilde{\pi}^{k+1}\_{ij}$ and $\varphi(\mathbf{x}^k)\_{ij}$. However, thanks to Theorem 2.4, we can now slightly modify Algorithm 1 to establish the iteration complexity. More specifically, let $e^k = \langle \tilde \pi^{k}, Z(\mathbf{x}^k)\rangle$, where $\tilde \pi^{k}: = \mathsf{Round}(\phi_{\eta_k}(\mathbf{x}^k, \pi^k), \Pi(r,c))$ is a feasible matrix returned by the rounding procedure mentioned in Theorem 2.4. By modifying the ``if'' condition in Line 6 of Algorithm 1 as \\[ \\|W^{k}\\|\_F \leq \gamma\_W \\|W^{k-1}\\|\_F \\quad \\mathrm{and}\\quad e^k \leq \gamma_W e^{k-1}, \\] we can establish the iteration complexity of the whole exponential ALM without compromising its global convergence. By leveraging the connection between the approximate stationary points of the subproblem and the original problem as proven in Theorem 2.4, we can ensure that the algorithm will terminate within at most \\[ \\mathcal{O}\\left(\\max\\left\\{\\log \\frac{1}{\epsilon\_1}, \\log \\frac{1}{\epsilon\_2}, T\_k\\right\\}\\right) \\] iterations, where $T_k := \min \\{k \mid \varrho_k \leq \epsilon_c\\}$. We will incorporate these improved results in the revised version to address your concerns. Thank you once again for your valuable comment. - Response to Question 2. Thanks very much for this important comment. The main computational cost in the proposed iRBBS and R(A)BCD proposed by (Huang et al. 
2021a) lies in the following steps: - Compute the inexact Riemannian gradient $\xi^t$, whose cost is $\mathcal{O}(ndk + n^2k + dk^2)$. - Update $U^{t+1} = \mathrm{Retr}_{U^t}(-\tau_t \xi^t)$, whose cost is $\mathcal{O}(dk^2)$. - Compute the matrix $A \in \mathbb{R}^{n \times n}$ with $A_{ij} = \pi_{ij}^k \exp(-\langle M_{ij}, UU^T\rangle/\eta_k)$ (shown in Line 255), whose cost is $\mathcal{O}(n^2)$. - Perform the Sinkhorn iteration (21) or (27), whose cost is $\mathcal{O}(n^2)$. Note that the cost of verifying whether the linesearch condition (25) holds is low since $\mathcal{L}(\mathbf{x}^{t+1})$ is always equal to $r^T \alpha^{t+1} + c^T \beta^{t+1}$ due to $\\|\zeta^{(\ell)}\\|_1 = 1$, as shown below Eq. (21). - Response to Limitations. Thanks very much for pointing out this important issue. Now we can overcome this limitation by assuming the Riemannian versions of the following three conditions, including the linear independence constraint qualification, the strict complementarity condition, and the second-order sufficient condition. This extension builds upon the results of Echebest et al. (2016) (listed as Ref. [18] in the paper). We will incorporate this result and highlight the significance of considering these Riemannian conditions in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the clear discussions. I increased the score to 5.
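For context on the cost breakdown in the rebuttal above: each Sinkhorn iteration amounts to two matrix-vector products with an $n \times n$ kernel, hence the stated $\mathcal{O}(n^2)$ cost. Below is a generic, textbook Sinkhorn scaling on toy data (all values are assumptions; the paper's Eqs. (21)/(27) use a log-domain stabilized variant, which matters when $\eta$ is small):

```python
import numpy as np

rng = np.random.default_rng(2)
n, eta = 6, 0.5

# Toy marginals and cost matrix (stand-ins, not the paper's data).
r = rng.random(n); r /= r.sum()
c = rng.random(n); c /= c.sum()
C = rng.uniform(0.0, 1.0, size=(n, n))

# Classic Sinkhorn scaling for entropy-regularized OT: each iteration is
# two matrix-vector products, i.e. O(n^2) work per iteration.
# (For small eta, exp(-C/eta) underflows, which is why a log-domain
# stabilized variant is used in the rebuttal's Eq. (21).)
K = np.exp(-C / eta)
u = np.ones(n)
for _ in range(1000):
    v = c / (K.T @ u)
    u = r / (K @ v)

# Transport plan with (approximately) the prescribed marginals r and c.
P = u[:, None] * K * v[None, :]
assert np.allclose(P.sum(axis=1), r, atol=1e-8)
assert np.allclose(P.sum(axis=0), c, atol=1e-8)
```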
Summary: The authors first reformulate the computation of the PRW distance as an optimization problem over the Cartesian product of the Stiefel manifold and the Euclidean space with additional nonlinear inequality constraints. They then propose a Riemannian exponential augmented Lagrangian method (REALM) for solving the problem. Strengths: 1. The authors propose a Riemannian exponential augmented Lagrangian method (REALM) to efficiently and faithfully compute the PRW distance and establish the global convergence of REALM, in the sense that any limit point of the sequence generated by the algorithm is a stationary point of the original problem. 2. To efficiently solve the subproblem in REALM, the authors also propose a novel and practical algorithm, namely, the inexact Riemannian Barzilai-Borwein (BB) method with Sinkhorn iteration (iRBBS). Weaknesses: 1. The authors should provide a proof sketch for their main theoretical analysis in this paper. 2. The authors should add more experimental results to verify their theoretical results and their algorithms. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the above section Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - Response to Question 1. Many thanks for your suggestion. We agree with you that it is always helpful to provide a proof outline or a preview of the proof before delving into detailed theoretical analysis. We shall take your suggestion into account when we revise the paper. - Response to Question 2. Thanks for this comment. We shall try to provide more simulation results to better verify the obtained theoretical results. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. The rebuttal has clarified my questions and I decided to keep my score. --- Rebuttal 2: Comment: Dear Reviewer X3cc: can you take a look at the authors' rebuttal, and see if your comments are addressed?
Summary: This work proposes a new method, called REALM, to compute the projection robust Wasserstein (PRW) distance. The method REALM is an extension of the exponential augmented Lagrangian method to the Riemannian setting. The convergence of REALM is established. To solve a subproblem within REALM, this work proposes an inexact Riemannian Barzilai-Borwein method with Sinkhorn iteration (iRBBS). The complexity rate of iRBBS to solve the subproblem, whose solution is an $(\epsilon_1,\epsilon_2)$-stationary point of the PRW problem, matches the existing works in the literature. This work claims robustness with respect to parameter tuning in the proposed algorithms. Strengths: This work proposes a new method to compute the PRW distance with solid theoretical guarantees. Weaknesses: The complexity rate matches the existing rate. The idea of reformulating the PRW distance computation as a Riemannian optimization problem is not new, e.g. Huang et al. 2021. The claimed easier parameter tuning seems to need more elaboration, either from the perspective of theory or from numerical evidence. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Hello authors, I have the following questions. 1. Generally speaking, the augmented Lagrangian method also suffers from a poor choice of penalty parameters, e.g., see Curtis et al. 2015. In your work, it is claimed that REALM can "potentially avoid too small penalty parameters". I was wondering if you could provide some intuition here (to explain why) or explicitly point out any efforts in the algorithmic design that yield this property. 2. When you extend the exponential ALM, you claim it is a nontrivial extension because the specific problem structure encourages the specific conditions (14) and (16). Besides this, is there anything significantly nontrivial? 3. The iRBBS is discussed in Part 3. In particular, you mention that the cost of updating $\alpha$ and $\beta$ is much less than that of updating $U$.
This feature appears to be a valuable addition to your framework (17). Could you confirm my understanding? Is it possible to elaborate on/support your proposed iRBBS, e.g., with respect to its implementation simplicity, to make your work more distinct from the existing literature? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: In the Euclidean space, some existing works consider the boundedness of the penalty parameters, e.g., see Echebest et al. 2015. While the additional (Riemannian, inequality) constraints might make the situation totally different, I was wondering whether that existing literature could shed light on further investigating the parameter settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your positive comments on the basic results achieved in this paper and your helpful comments and suggestions. - Response to Weakness. Thanks for your comments. We want to take this opportunity to elaborate more on the nonmonotone linesearch condition (25) to clarify this issue from the theory perspective. In particular, the problem of interest, problem (11), is a three-block optimization problem, and Huang et al. 2021 proposed to solve it by R(A)BCD. The current paper treats it as a one-block optimization problem with $U$ as the only variable, as shown in problem (17). The new perspective allows applying the (inexact) gradient algorithm to solve problem (17); see the proposed iRBBS in Algorithm 2. The key difference here is that the stepsizes in iRBBS can be adaptively chosen with the help of the linesearch condition (25), whereas tuning the stepsizes for updating $U$ is not easy for R(A)BCD. This is one of the two main contributions of the current paper. The other benefits of treating the problem of interest as a one-block optimization problem as in (17) can be found in our responses to your Question 3. - Response to Question 1. Thanks for your important comment. The intuition behind using the proposed exponential ALM lies in its equivalence to a proximal point method for solving the dual problem, incorporating an entropy function. The estimate of the multiplier matrix $\pi^k$ can be viewed as the proximal point center, and a varying center $\pi^k$ typically outperforms a fixed center $\pi^k \equiv 11^T$ in the proximal point method. To further improve the numerical performance (in particular the numerical stability) of the proposed REALM, we have carefully exploited the structure of the PRW distance problem and incorporated this special structure into the general exponential ALM framework. More specifically, the first algorithmic innovation is the conditions in (14), which are crucial to guarantee the boundedness of the iterates.
The second algorithmic innovation is the simple rules for updating the penalty parameters and the multiplier matrices, which take into account the specific connections between approximate stationary points of the subproblem and the original problem. The latter is of great importance to both the global convergence and the numerical performance/stability of the proposed REALM. - Response to Question 2. Thank you for this important comment, which gives us an opportunity to clarify this point. In addition to the two specific conditions (14) and (16), the following two differences distinguish our proposed REALM from the existing exponential ALM (e.g., the one proposed in Echebest et al. 2016): - **Measure of complementarity.** The measure of complementarity used in our proposed REALM differs from that employed in Echebest et al. 2016. More specifically, the measure in our REALM is motivated by the direct use of the complementarity condition adopted in the classical (quadratic) ALM, while the measure of complementarity used in Echebest et al. 2016 can be regarded as ``an appropriate measure for the exponential case" as mentioned in the second paragraph on Page 96 of their paper. - **Conditions on global convergence.** To guarantee the global convergence of exponential ALMs, some (strong) constraint qualifications and the boundedness of the iterates generally need to be assumed; see Proposition 2.1 and Theorem 2.1 in Echebest et al. 2016 for the corresponding results. The current paper extends the exponential ALM to solve a class of inequality-constrained nonlinear optimization problems with manifold constraints. To the best of our knowledge, this is the first Riemannian version of the exponential ALM.
In particular, for our considered PRW distance problem, we can prove the boundedness of the iterates generated by the proposed REALM without making this assumption and establish the global convergence of the proposed REALM without explicitly dealing with the constraint qualification assumption. This advantage is mainly due to the aforementioned *essential* changes in the proposed REALM (compared with the existing methods), i.e., the specific conditions (14) and (16) on the solution of subproblems and the adopted measure of complementarity. - Response to Question 3. Your understanding is absolutely correct, and we truly appreciate your valuable suggestion. One motivation for proposing iRBBS is that the condition number of the one-block optimization problem could be smaller than that of the three-block optimization problem. Consequently, employing an inexact gradient descent method to solve the one-block optimization problem would be more efficient than simply using the block coordinate descent approach for solving the corresponding three-block optimization problem. Moreover, the inexact gradient method also provides important and valuable insights into adaptively choosing the stepsize for updating $U$ via the nonmonotone linesearch. Last but not least, the fact that updating $\alpha$ and $\beta$ is much easier than updating $U$ also contributes to the superior performance of iRBBS compared to existing block coordinate descent approaches in practice, as you correctly pointed out. Otherwise, there would be no need to update $\alpha$ and $\beta$ multiple times while updating $U$ only once in each iteration of the proposed iRBBS. - Response to Limitations. Thank you very much for this insightful comment. After a careful and thorough investigation, we can now establish the boundedness of the penalty parameter for the proposed REALM by leveraging the analysis in Echebest et al. 2016.
To achieve this, we need to consider the Riemannian versions of the following three conditions: the linear independence constraint qualification, the strict complementarity condition, and the second-order sufficient condition. However, it remains unclear under which (easily checkable) conditions the above three conditions hold true, which requires further investigation. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification, which addresses most of my concerns. --- Rebuttal 2: Comment: Dear Reviewer KEsx: can you take a look at the authors' rebuttal, and see if your comments are addressed?
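[Editorial illustration] The stepsize strategy discussed in the rebuttal above, a Barzilai-Borwein stepsize safeguarded by a nonmonotone linesearch in the spirit of condition (25), can be sketched in a minimal Euclidean form. The toy objective, the window length `M`, and all parameter values below are illustrative assumptions; this is not the paper's actual iRBBS, which operates on a manifold with retractions.

```python
import numpy as np

A = np.diag([1.0, 10.0])  # toy ill-conditioned quadratic (illustrative)

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

def bb_nonmonotone(x, max_iter=200, M=5, rho=1e-4, tol=1e-8):
    """Gradient descent with a Barzilai-Borwein (BB1) stepsize and a
    Grippo-type nonmonotone Armijo linesearch: a trial step is accepted
    once f decreases relative to the max of the last M function values."""
    hist = [f(x)]
    g = grad(x)
    alpha = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, f_ref = alpha, max(hist[-M:])
        while f(x - t * g) > f_ref - rho * t * (g @ g):
            t *= 0.5  # backtrack until the nonmonotone condition holds
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # BB1 stepsize for the next trial, safeguarded against tiny s @ y
        alpha = np.clip((s @ s) / max(s @ y, 1e-12), 1e-4, 1e4)
        x, g = x_new, g_new
        hist.append(f(x))
    return x, hist

x_opt, hist = bb_nonmonotone(np.array([5.0, 3.0]))
```

Because the acceptance test compares against the maximum of a short history rather than the last value, the BB stepsize can be taken essentially unmodified on most iterations, which is the point of nonmonotone safeguards.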
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a Riemannian exponential augmented Lagrangian method for solving the projection robust Wasserstein distance problem. The authors claim two contributions compared with previous works: 1) the proposed algorithm is much more stable, as $\eta$ needs to be small in previous works; 2) a Riemannian Barzilai-Borwein method is proposed to adaptively tune the step size. Numerical experiments comparing with previous works are reported. Strengths: 1. The authors propose to do multiple Sinkhorn steps and one Riemannian gradient step in each iteration because Sinkhorn steps are much cheaper. This idea makes sense, and the authors provide a convergence guarantee for the proposed algorithm. 2. Numerical experiments show the advantages of the proposed algorithm compared with RBCD. Weaknesses: 1. My main concern is the claim that the proposed algorithm is numerically more stable than RGAS/RBCD because both of those require $\eta$ to be very small. However, I am not convinced by simply saying "Based on the knowledge that the exponential ALM is usually more stable than the exponential penalty approach ...". Are there any systematic studies to support this claim? The proposed ALM requires the penalty parameter $\eta$ to be exponentially decreasing, and such an $\eta$ appears in the denominator of an exponential term (e.g., Eq. 7). Wouldn't this cause the same numerical instability issue? Besides, the proposed algorithm applies the Sinkhorn iteration, which naturally introduces numerical issues. 2. The writing of this paper needs to be further improved. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: see the weakness Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: see the weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for pointing out the weakness, which gives us an opportunity to clarify it. First, there are indeed some systematic studies comparing the exponential ALM and exponential penalty approaches. We wish to highlight several main results from the existing literature on this matter: - Convex case. - Tseng and Bertsekas [TB93], listed as Ref. [45] in our paper, proved that the penalty parameter $\eta_k$ in the exponential ALM (with subproblems solved exactly) can be chosen as any positive number. This is in sharp contrast to the exponential penalty approach, where the penalty parameter is typically required to be very small or go to zero. Please refer to Proposition 3.1 in [TB93]. Additionally, [TB93] established the linear convergence rate of the exponential ALM for solving linear programs; see Proposition 4.1 therein. - More recently, Yang and Toh [YT22], listed as Ref. [50] in our paper, proposed an inexact version of the Bregman proximal point algorithm, which is equivalent to the exponential ALM with subproblems solved inexactly. They proved that the penalty parameter can be chosen as any positive number. Moreover, they claimed that their proposed approach exhibits greater stability than the exponential penalty approach (when applied to solve the standard optimal transport problem). In particular, they mentioned in the paper: ``*our iBPPA with the entropic proximal term can bypass some numerical instability issues that often plague the popular Sinkhorn's algorithm used in the OT community. This is because in contrast to Sinkhorn's algorithm, our iBPPA does not require the proximal parameter to be very small in order to obtain an accurate approximate solution, as evident from our numerical results*" - General nonlinear case.
- Under the regularity condition, the strict complementarity condition, and the second-order sufficient condition, Dussault [Dus04] demonstrated that the exponential ALM exhibits one-step superlinear convergence with a rate of $4/3$ (see Theorems 1.1 and 2.1 therein). In contrast, the exponential penalty approach exhibits two-step superlinear convergence with the same rate (see the second paragraph on Page 475). Additionally, Dussault [Dus04] provided a detailed example illustrating that the exponential ALM can be more stable than the exponential penalty approach. For further information and specific results, please refer to Tables 3.1-3.3 in the referenced paper. - Recently, Echebest et al. [ESS16], listed as Ref. [18] in our paper, showed that the penalty parameter $\eta_k$ *can be bounded away from zero* under the linear independence constraint qualification, the strict complementarity condition, and the second-order sufficient condition. However, there is no evidence that the exponential penalty approach enjoys a similar property under these conditions. Based on the aforementioned studies on the exponential ALM, we stated in our paper ``Based on the knowledge that the exponential ALM is usually more stable than the exponential penalty approach...". However, we apologize for not providing more detailed explanations and justifications in the current version. We will certainly include the corresponding discussion in the revised version. Second, we acknowledge that the exponential ALM might face a similar numerical instability issue as the exponential penalty approach if the penalty parameter $\eta_k$ becomes too small. However, we hope the following explanations will convince you that the exponential ALM is potentially more stable than the exponential penalty approach for the PRW problem.
- To address the potential numerical instability caused by the small penalty parameter $\eta_k$, we propose to update $\eta_{k+1}$ as $\eta_{k+1} = \max\\{\gamma_{\eta} \eta_k, \eta_{\min}\\}$ as in Line 279. This update scheme ensures that the penalty parameter does not become too small during the iterations. We observed from our numerical experiments in Table 2 and Tables 4-5 that, compared to the exponential penalty approach, the exponential ALM can accommodate larger values of $\eta_{\min}$. This advantage is primarily attributed to the update of the multiplier matrices. By allowing for a larger $\eta_{\min}$, the exponential ALM is less likely to suffer from the numerical instability due to the use of a small penalty parameter, as observed in the exponential penalty approach. - Theoretically, we can extend the analysis in [ESS16] to prove that the penalty parameter $\eta_k$ in the proposed Riemannian exponential ALM will also be bounded away from zero if the Riemannian versions of the three conditions hold, including the linear independence constraint qualification, the strict complementarity condition, and the second-order sufficient condition. We will include additional remarks in the revision to provide further clarity on this point. It should be mentioned that these three conditions might not be easy to check since we do not have prior knowledge of the solution. However, based on our numerical results, we believe that for certain instances, the three conditions indeed hold. Third, we shall further enhance the clarity of our writing in the revision to ensure that the results are presented more clearly and the paper becomes easier to follow. The corresponding references are listed in order. - [Dus04] J.-P. Dussault. Augmented non-quadratic penalty algorithms. *Math. Program.*, 99(3):467-486, 2004. - [ESS16] N. Echebest, M. D. Sánchez, and M. L. Schuverdt. Convergence results of an augmented Lagrangian method using the exponential penalty function. *J. Optim. 
Theory Appl.*, 168:92-108, 2016. - [TB93] P. Tseng and D. P. Bertsekas. On the convergence of the exponential multiplier method for convex programming. *Math. Program.*, 60(1-3):1-19, 1993. - [YT22] L. Yang and K.-C. Toh. Bregman proximal point algorithm revisited: A new inexact version and its inertial variant. *SIAM J. Optim.*, 32(3):1523-1554, 2022. --- Rebuttal Comment 1.1: Title: reply to authors' rebuttal Comment: Thanks for the clarification. I've increased my score to 5.
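[Editorial illustration] On the Sinkhorn-instability point raised in this exchange (a small regularization $\eta$ inside an exponential): a standard remedy, not claimed to be what the paper implements, is to run the Sinkhorn updates in the log domain via log-sum-exp, so that the kernel $\exp(-C_{ij}/\eta)$ is never formed explicitly. A minimal sketch with illustrative data and parameter values:

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_log(C, mu, nu, eta, n_iter=2000):
    """Log-domain Sinkhorn for entropic OT: iterates the dual potentials
    f, g with log-sum-exp, never forming exp(-C / eta) directly."""
    f = np.zeros(len(mu))
    g = np.zeros(len(nu))
    for _ in range(n_iter):
        f = eta * np.log(mu) - eta * logsumexp((g[None, :] - C) / eta, axis=1)
        g = eta * np.log(nu) - eta * logsumexp((f[:, None] - C) / eta, axis=0)
    # transport plan pi_ij = exp((f_i + g_j - C_ij) / eta)
    return np.exp((f[:, None] + g[None, :] - C) / eta)

rng = np.random.default_rng(0)
C = rng.random((5, 5))
mu = np.full(5, 0.2)
nu = np.full(5, 0.2)

# the naive kernel exp(-C / eta) underflows already for moderate C / eta ...
assert np.exp(-1.0 / 1e-3) == 0.0
# ... while the log-domain iteration stays finite and matches the marginals
pi = sinkhorn_log(C, mu, nu, eta=0.2)
```

The subtraction inside `logsumexp` performs the max-shift that keeps every exponent bounded, which is why the recursion remains finite even when $C/\eta$ is in the hundreds.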
null
null
null
null
null
null
Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent
Accept (poster)
Summary: This paper provides new stability bounds for SGD: they rely on perturbation theory for Markov chains and on the ergodicity of SGD. Strengths: - Well written, well presented paper. - New connection between the theory of Markov chains and generalization of learning algorithms. - First (to my knowledge) uniform-in-time stability bounds. - Extensions and limits are properly discussed. Weaknesses: N/A Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - On the fact that most of the theory requires $\ell$ to be Lipschitz-continuous: would it be because this work only considers the 1-Wasserstein distance? With the 2-Wasserstein, maybe this technique would be able to handle $L$-smooth losses, what do you think? In particular, could you leverage Thm. 4.2 in that case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for a careful reading of our paper and finding that our paper is well-written with some new results on the connections between the theory for Markov Chain and generalization of learning algorithms. Thanks for the excellent question. In our paper, most of the theory is derived using the tool of the Wasserstein perturbation method developed in [RS18], and using this particular tool, 1-Wasserstein algorithmic stability bound is obtained. In order to obtain the generalization error bound from the 1-Wasserstein algorithmic stability bound, one has to assume that the surrogate loss is Lipschitz [RRT+16]. However, some of the results in this paper do not rely on the Wasserstein perturbation method [RS18], such as Theorem 4.2. For these results, we are able to obtain algorithmic stability in $p$-Wasserstein for some $p>1$. In particular, with the 2-Wasserstein, we are indeed able to handle smooth loss functions if we also establish uniform L^2 bounds following the approach in [RRT17]; see also [FR21]. We will add these discussions in the revised version of the paper. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for your answer ! For the sake of the discussion with the other reviewers, let me further detail my opinion on the paper: - I agree with the other reviewers that the strongly convex case is not novel, and that the worse dependency on $\mu$ is a bit problematic. - However, the proof technique is new, as well as the other results, and I believe it can be extended by future works. --- Reply to Comment 1.1.1: Comment: Thank you very much for going over our rebuttal.
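[Editorial illustration] The notion of 1-Wasserstein algorithmic stability discussed in this thread can be illustrated numerically on a toy problem (this is our own sketch, not a reproduction of the paper's experiments): run constant-stepsize SGD on a quadratic loss over two datasets differing in a single sample, and compare the laws of the last iterate. All sizes and constants below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sgd_last_iterates(data, eta=0.1, n_steps=500, n_runs=2000, seed=0):
    """Last iterate of constant-step SGD on f(theta, x) = (theta - x)^2 / 2,
    vectorized over n_runs independent runs sharing one index stream."""
    r = np.random.default_rng(seed)
    theta = np.zeros(n_runs)
    for _ in range(n_steps):
        x = data[r.integers(len(data), size=n_runs)]
        theta -= eta * (theta - x)  # stochastic gradient step
    return theta

n = 200
rng = np.random.default_rng(1)
X = rng.normal(size=n)
X_hat = X.copy()
X_hat[0] += 1.0  # neighbouring dataset: exactly one sample perturbed

# same seed => same sampled indices, i.e. a synchronous coupling of the chains
runs = sgd_last_iterates(X, seed=0)
runs_hat = sgd_last_iterates(X_hat, seed=0)
w1 = wasserstein_distance(runs, runs_hat)
```

Under this synchronous coupling the two chains differ by a geometric sum that hits the perturbed sample with probability $1/n$ per step, so `w1` concentrates around $1/n = 0.005$ here, and it stays at that order no matter how large `n_steps` is, which is the uniform-in-time flavour of the bounds under discussion.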
Summary: This paper studies the algorithmic stability of SGD in order to bound its expected generalization error when using Lipschitz losses. To do so, they consider Wasserstein stability instead of the standard uniform stability, which is defined using the dual representation of the Wasserstein distance and the representation of the generalization error from [HRS16]. To bound the Wasserstein stability, the paper employs Markov chain perturbation theory. The main idea is to follow three steps: (i) showing that the optimizer is geometrically ergodic, which essentially boils down to showing that the Wasserstein distance of the transition kernels can be bounded from above by the distance of the inputs of said kernels; (ii) obtaining a Lyapunov function for the optimizer and the loss, which gives a sense of stability to the system (at least intuitively, as it is a necessary and sufficient condition for stability for certain classes of ODEs); and (iii) bounding the Wasserstein distance between the transition kernels of the optimizer with the same initialization but using two neighbouring datasets. Using this three-step process, they first give a bound for the quadratic case with rate $O(\eta / n)$. Similarly, under strong convexity and some pseudo-Lipschitzness assumption on the gradients, they show a bound with rate $O(1/n)$ with an appropriate choice of the learning rate. To deal with non-convex cases, they show that if the loss dissipates with Assumption 3.3 (which seems like a relaxed version of strong convexity), then adding a zero-mean noise with bounded second moment is sufficient to obtain a bound with rate $O(b/n)$, where $b$ is the batch size. This bound of course includes SGLD, but is not restricted to this case. The addition of the noise in the non-convex case was to make sure that the ergodicity condition in step (i) was provable. The paper also studies what happens when noise is not included.
To deal with this, still assuming that the loss dissipates with Assumption 3.3, they can bound the squared Wasserstein distance of order 2. This way, they can obtain bounds with rate $O(1/\sqrt{bn} + c)$, where $c$ is some constant. Therefore, the exclusion of noise in the analysis generates an extra bias term. All the bounds up to here are uniform in time, that is, they hold for every step of the optimization process. Fortunately, assuming that the loss $f$ is $\mu$ strongly-ish convex, in the sense that $\langle \nabla f(\theta, x) - \nabla f(\vartheta, x) , \theta - \vartheta \rangle \geq \mu \lVert \theta - \vartheta \rVert^p$ for some $p \in (1,2)$; and that the gradients are Lipschitz-ish in the sense that $\lVert \nabla f (\theta, x) - \nabla f(\vartheta, y) \rVert \leq K\_1 \lVert \theta - \vartheta \rVert^{p / 2} + K\_2 \lVert x - y \rVert ( \lVert \theta \rVert^{p-1} + \lVert \vartheta \rVert^{p-1} + 1)$; then they can show asymptotic generalization bounds with rate $O(1/n^{1/p})$. Strengths: The paper is very well written and "easy" to follow despite its technical difficulty, especially in the proofs. I have to congratulate the authors for making difficult proofs look simple with their explanations. I believe that studying the stability of algorithms in terms of the stability of the final transition kernels in the Wasserstein distance is novel (although similar bounds on the generalization error are given in [RGBTS21, Theorem 2] in a different context). This is certainly interesting, as it not only gives some new results but also new strategies to tackle the problem. These new strategies can be useful to build upon for other researchers in the future. The bounds for the strongly-convex functions and those for non-convex losses with additive noise pass the "eye test", as they achieve the expected rates with $n$, although some of the other parameters are a bit unclear (see weaknesses).
The bounds for non-convex losses with additive noise with bounded variance are interesting, as they hold for noises other than Gaussian. This can be helpful either (i) to construct SGLD-like algorithms that work better in practice or (ii) to study more general situations in which one could show that the gradient noise in GD actually behaves like an additive noise with bounded variance (see e.g. [WHXHBZ20]). **Additional references** Borja Rodríguez-Gálvez, Germán Bassi, Ragnar Thobaben, and Mikael Skoglund. "Tighter expected generalization error bounds via Wasserstein distance". NeurIPS 2021. Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, Vladimir Braverman, and Zhanxing Zhu. "On the Noisy Gradient Descent that Generalizes as SGD". ICML 2020. Weaknesses: * A criticism of the paper could be that the results from Section 3 recover the rate of previous results with respect to $n$, while sometimes having a worse rate in other parameters. This is actually not very important in my opinion, as I value more the new techniques and the fact that this new analysis can be employed in different settings to achieve the desired rate. * Something more concerning to me are some of the assumptions and parameters in the results. These are sometimes hard for me to interpret and contextualize. For example: * Assumption 3.1. The authors explain that some problems such as GLMs satisfy this condition [Bac14]. Which other problems satisfy the condition? For example, the Lipschitz constant of gradients in neural networks can be very large (or even infinite in some situations), which renders bounds that depend on it essentially vacuous. Is this similar for this pseudo-Lipschitzness condition? How does it behave in familiar scenarios? This can be accompanied by a simplification of Theorem 3.2 in some familiar settings. * Assumption 3.3. This assumption seems to me to be essentially a strong-convexity assumption with some slack.
While this makes the problem more general than a strongly convex problem, it is unclear to me how much more general it is. I would have liked more intuition and explanation around this assumption other than just saying that it is common in stochastic analysis [RRT17, GGZZ22] and that it has been considered in neural network settings [AS22]. Could you give some examples of familiar settings where this assumption holds, and give an intuitive explanation of how much more general than strong convexity this is? * Theorem 3.3. Some of the elements such as the parameter $M$ and $\bar{\eta}$ are hard to put into context, and in general the bound is hard to interpret. Even though the authors make an effort to clarify that in Remark 3.4 and Corollary 3.5, it is still difficult to understand the result well in terms of these parameters. * Theorem 4.1. How big is the bias term in certain familiar scenarios? Could it be that this term is small enough for us not to care in some settings? * Assumptions 4.1 and 4.2. Could you please give some examples of losses and settings where these assumptions are satisfied? This could help contextualize the results. While I already like the paper and its ideas as it is, without a better understanding of these issues, it is hard for me to assess the actual contribution of the presented results. I am happy to increase my score if they are addressed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions mainly concern the assumptions employed throughout the text and some of the parameters appearing in the results. These questions are mainly outlined in the weaknesses, but to summarize: * Could you give more intuition on the assumptions that are employed (e.g. how much more general Assumption 3.3 is with respect to strong convexity)? * Could you give examples of losses and situations where these assumptions are satisfied so we can better contextualize the results?
* Similarly, could you give some examples of the parameters (or some order of magnitude empirically maybe) that appear in the results? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors discuss some of the limitations of the paper through the text. Some of the limitations regarding other assumptions could be clarified in the text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your time invested in our paper. Below are our responses to your questions: **Weaknesses**: 1. Weaker result with respect to other parameters: We get an optimal rate with respect to the number of samples, but the dependence on other parameters might not be optimal in some results, for instance the dependency on the strong convexity constant. However, our result is more general than existing SGD stability results: our bound on the expected generalization error applies to an entire class of Lipschitz loss functions, not just to a single one, whereas the existing stability results for SGD that we are aware of typically hold for a single loss function at a time and do not apply to a class of functions. Also, as the referee mentioned, the value of our analysis technique is that it can be employed in different settings. 2. Assumption 3.1: Assumption 3.1 is more general than the smoothness (Lipschitzness of the gradients of $f$) assumption. In fact, for $K_2 = 0$, the assumption becomes a smoothness assumption on $f$ in the parameter $\theta$. Substituting $K_2=0$ in our results provides the bound for smooth $f$. For example, regularized logistic regression with bounded data falls into the $K_2 = 0$ case. We will add a remark about this during the revision. We conclude that allowing $K_2 > 0$ in Assumption 3.1 allows us to cover a larger class; therefore, we interpret Assumption 3.1 as a more general version of the smoothness assumption. As a concrete example, consider the loss $f(x,\theta) = | \theta^T x - b|^p$ with $p\in (1,2)$, which arises in robust regression. The gradient of $f$ with respect to $\theta$ is Hölder continuous with exponent $p-1 \in (0,1)$; it is not Lipschitz, but it is pseudo-Lipschitz. We agree with the referee that Lipschitz constants in neural networks can be very large (assuming the norms of the weights are bounded), in which case our constants $K_1$ and $K_2$ in Assumption 3.1 will unfortunately also be large.
However, if weight regularization is used, i.e., if a quadratic penalty term is added to the training loss, then by a Lagrangian reformulation this is equivalent to training with norm constraints on the weights, which allows us to control the Lipschitz constants. 3. Assumption 3.3: The class of dissipative functions we consider are the ones that admit some gradient growth in radial directions outside a compact set. Inside the compact set, though, they can have quite general non-convexity patterns. As concrete examples, they include certain one-hidden-layer neural networks (arXiv:2205.14818), and they arise in non-convex formulations of classification problems, for instance in logistic regression with a sigmoid/non-convex link function. They can also arise in robust regression problems; see e.g. [X Gao, M Gürbüzbalaban, L Zhu - Operations Research, 2022]. Also, any function $f$ that is strongly convex outside of a ball of radius $R$ for some $R > 0$ will satisfy this assumption. Consequently, regularized regression problems where the loss is a strongly convex quadratic plus a smooth penalty that grows slower than a quadratic will belong to this class; a concrete example would be smoothed Lasso regression. Many other examples are given in arXiv:2007.11612v3. Dissipative functions also arise frequently in the sampling and Bayesian learning literature, for instance in the study of stochastic gradient Langevin dynamics (SGLD), where this assumption is used to show that a stationary distribution exists (see e.g. arXiv:1702.03849). 4. Theorem 3.3: Thanks for the comment. In the revised version, we will add further explanations to discuss these parameters and how they affect the main results of the paper. For example, the parameter $\bar{\eta}$ appears in the upper bound in equation (3.9) that controls the 1-Wasserstein algorithmic stability of SGD. It is easy to see from equation (3.9) that the smaller $\bar{\eta}$ is, the smaller the 1-Wasserstein bound.
By the definition of $\bar{\eta}$, the larger $\hat{\eta}$ is, the smaller the 1-Wasserstein bound. As a result, we would like to choose $\hat{\eta}$ to be as large as possible, and equation (3.10) provides an explicit value that $\hat{\eta}$ can take (which is already as large as possible from our theory). The parameter $M$ can be interpreted in a similar way. We will add more discussion and intuitive explanations in the revised version of the paper. 5. Theorem 4.1: If the loss is close to being strongly convex, then the bias term $K/m$ will be small. For example, consider a loss which is $\mu$-strongly convex outside a ball of radius $R$, and inside this ball of radius $R$ it can be non-convex with a bounded Hessian. As the ball radius $R$ gets smaller, we have $K\to 0$ and $\mu\to m$, so that $K/m$ goes to zero as $R$ goes to zero. A simple toy example to visualize this would be the double-well example (see Example 1 in [X Gao, M Gürbüzbalaban, L Zhu - Operations Research, 2022]). That being said, we admit that such losses are not common in the statistical learning literature, and by simply adding Gaussian noise to the iterations, we can analyze more general non-convex losses, as we detail in Section 3.3. 6. Assumptions 4.1, 4.2: Even though Assumptions 4.1 and 4.2 have been used in prior theoretical studies, as we cited in the paper, after your question we realized that unfortunately those studies did not provide any concrete examples that satisfy these assumptions. Nevertheless, as an example, we suspect that the loss $f(\theta, x) = |a^T \theta - b|^p$ satisfies the assumptions, where $p \in (1,2)$ and $x=(a,b)$ is the input-output pair. We could not formally prove this in the limited rebuttal period; however, we will try our best to come up with concrete examples for these assumptions.
We agree with you that such examples would play an important role in terms of appreciation of the results, and we would like to thank you for bringing this to our attention. --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: Thank you for your rebuttal. I liked that you went over all my questions and comments, especially regarding interpretation and examples. Please, if possible, make sure to incorporate all of them into the main text of the paper. I think it improves it. Also, if you end up proving that $| a^T \theta - b|^p$ satisfies assumptions 4.1, 4.2 for $p \in (1,2)$, please respond to this answer as I am curious to see that.
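[Editorial illustration] The dissipativity condition discussed in this thread, in its standard form $\langle \theta, \nabla f(\theta)\rangle \ge m \lVert\theta\rVert^2 - K$, can be checked by hand on a one-dimensional double-well. The specific function and the constants $m = K = 1$ below are our own illustrative choice, not necessarily the example from the cited reference:

```python
import numpy as np

# double-well loss f(t) = t^4/4 - t^2/2: non-convex near the origin
# (f''(t) = 3 t^2 - 1 < 0 for |t| < 1/sqrt(3)), yet dissipative:
# t * f'(t) = t^4 - t^2 >= m * t^2 - K with m = 1, K = 1,
# because t^4 - 2 t^2 + 1 = (t^2 - 1)^2 >= 0.
grad = lambda t: t**3 - t

m, K = 1.0, 1.0
t = np.linspace(-10.0, 10.0, 100001)
gap = t * grad(t) - (m * t**2 - K)
assert np.all(gap >= -1e-9)        # holds up to float rounding
assert np.any(3 * t**2 - 1 < 0)    # and f is indeed non-convex somewhere
```

The inequality is tight exactly at $t = \pm 1$, so neither constant can be improved for this function, illustrating how a function can be genuinely non-convex on a compact set while still satisfying the radial growth condition globally.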
Summary: In this paper, the authors study an approach to obtain generalization bounds for the SGD algorithm under different classes of objective functions, such as "quadratic strongly convex", "smooth strongly convex", and one subclass of not-necessarily-convex functions described in Assumption 3.3. Interestingly, they convey the message that, as in the strongly convex setting, they can provide generalization bounds in the non-convex setting as soon as they add some additive noise to the SGD update. Their approach is based on stability analysis, together with convergence properties of Markov chains. Strengths: - Mixing stability and Markov chain perturbation theory is a very interesting approach that I had never seen before. The technique might be reused in other settings. - The trick of adding noise to create generalization and making it work thanks to Markov chain theory is quite beautiful. Weaknesses: - The authors claim "non-convex" everywhere without more precision. I first thought of conditions like hypoconvexity. Some people study smooth optimization without any other assumption. Here the setting is very specific, and the authors should be more precise about it in the introduction. One cannot claim to solve problems in the "non-convex" setting just because the considered class contains one non-convex function. - It is actually quite natural to believe that adding noise reduces the generalization error, as the dependency on the data is reduced (the same argument is used for differential privacy). But we can also achieve this with an algorithm that simply returns a constant independently of the training data. The interesting thing is to reduce the generalization error while preserving good optimization properties. In this paper, the authors are degrading optimization performance: a constant step-size $\eta$ leads to convergence to a neighborhood of the solution, and adding noise leads to a larger size of this neighborhood.
This brings two questions: - Can we generalize this analysis to a decreasing step size, as the ergodicity would not be geometric anymore? - In the case of a constant step-size, can the authors discuss the values of the « optimization error + generalization error » terms depending on the additive noise variance? - l.291: « This indicates that it is necessary to add additional noise » -> No! While this section can serve as a good intuition of why additive noise was essential, this is not a clear proof of this necessity. The authors showed they found some weaker upper bound without additive noise. They did not show one cannot find a tighter one. - Notation suggestions: - The word « algorithm » is never defined before being used in Definition 2.5. Since in Definition 2.1, $X$ and $\hat{X}$ are not stochastic (instead we optimize over their values), it is even more important to stress here that an algorithm is indeed stochastic and that the expectation here is taken over this source of randomness conditioned on the values of $X$, $\hat{X}$ and $z$. However, the notation $\mathcal{A}: \bigcup_{n=1}^{\infty}\mathcal{X}^n \rightarrow \mathbb{R}^d$ means that $\mathcal{A}$ is deterministic. So either $\mathcal{A}$ returns a random variable, or it takes an additional random variable as input. - In 2.5, I find it odd to associate $\nu$ and $\hat{\nu}$ with $X$ and $\hat{X}$ as if those two datasets were special, while we optimize over all the possible datasets. So $\nu$ and $\hat{\nu}$ also vary. I would rather define a systematic mapping $\nu: X \rightarrow \nu_X$ so that this distribution is defined for all $X$. Or the variables over which the supremum is computed should be $\nu$ and $\hat{\nu}$, which are neighbors of each other. In the latter case, this should come with the definition that two distributions are neighbors iff there exist two neighboring datasets from which they are generated.
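To illustrate my point about the signature of $\mathcal{A}$, here is a toy sketch (my own, hypothetical code, not from the paper): a "randomized algorithm" becomes a genuine, deterministic function once the random sampling is an explicit input.

```python
import numpy as np

# Toy sketch (hypothetical, not from the paper): a randomized algorithm A
# is a deterministic function once the random sampling is an explicit
# input, i.e. A(dataset, rng) rather than A(dataset).
def A(dataset, rng):
    """One epoch of SGD on the 1-d quadratic 0.5 * (theta - x)^2."""
    theta, eta = 0.0, 0.1
    for x in rng.permutation(dataset):   # the only stochastic ingredient
        theta -= eta * (theta - x)       # gradient step
    return theta

data = np.array([1.0, 2.0, 3.0])
# Same explicit randomness -> same output: A is deterministic in (X, sampling).
out1 = A(data, np.random.default_rng(0))
out2 = A(data, np.random.default_rng(0))
assert out1 == out2
```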
Minor / typos: l.133: « $R(\omega)$ » -> $R(\theta)$ l.229: I guess $\xi_k$ is also independent of $\theta_{k-1}$ and $\Omega_k$? Maybe specify it. l.252: « our result in » -> our result is l.237: « Thereom » -> Theorem Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Just to be sure I understood the proof correctly, can the authors confirm that the only place the neighborhood assumption between datasets is used is in the expression of $\gamma$ in equation 2.9? If this is the case, I think this little intuition should be made explicit in the paper. - I am surprised to see in Lemma 3.3 that the bound on $\gamma$ does not depend on the batch size. Indeed, there is a certain probability that the batch does not contain the data point that differs from one dataset to the other, in which case the steps $P$ and $\hat{P}$ are identical. Then, the error is 0. The average error should be the error in the case that the differing data point is in the batch, times this probability, the latter being proportional to the batch size. - The dependency on $\sigma^2$ is unclear. In Th D1, the bound seems increasing in $\sigma^2$, so removing the additive noise seems harmless. However, looking closer, one notices that $\psi$ depends on $K_0$, which itself depends on $\sigma^2$. Why can we not still consider $\sigma^2=0$? The only thing we have to verify is that $\psi$ will not explode, i.e. $K_0>0$. But $K_0$'s expression is a complicated mix of many other variables. Hence it is hard to understand what happens when $\sigma^2=0$. Can you elucidate this question? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not much, except maybe that it applies to algorithms that contract, i.e.
SGD with constant step-size that cannot converge to the optimum. It would be nice to handle decreasing step-sizes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our paper. Below are responses to your questions: **Weaknesses**: * As the referee pointed out, our analysis applies to non-convex losses that satisfy some conditions (a dissipativity condition and a pseudo-Lipschitz gradient growth condition). As suggested, we will further clarify this in our introduction, emphasizing the assumptions we make. * Our approach mainly relies on the Wasserstein perturbation method developed in [RS18], which requires that the Markov chain is time-homogeneous. When the step-size is decreasing, the associated Markov chain is no longer time-homogeneous and thus [RS18] is no longer applicable. Therefore, the regime with decreasing step-size is out of the scope of the current paper. However, we do believe that it is possible to extend the analysis of [RS18] to deal with time-inhomogeneous Markov chains. But that would require us to write a paper that first extends the analysis of [RS18] and then applies it to the context of decreasing step-sizes. That is left as a future research direction which will be interesting to pursue. * In [RRT17], the authors provided an analysis of the optimization error depending on the additive noise variance; later on, these results were extended to different settings. We can directly combine our results with their results to obtain an optimization error + generalization error bound. We will mention this in the next version of the paper. * In line 291, what we meant to say was that for our proof technique to work well, adding noise was beneficial. We absolutely agree with the referee that this does not mean that additive noise is a must for an algorithm to perform/generalize better. * Thanks for the suggestions on the notations. The notation $\mathcal{A}$ does not mean that it is deterministic, even though it is a function of the dataset, which is deterministic. The extra randomness comes from the randomness in the SGD, not the dataset.
We will make this more transparent in the revised version of the paper. Indeed, the notation $\nu_{X}$, $\nu_{\hat{X}}$ might be better than the notation $\nu$ and $\hat{\nu}$ that we currently use. However, the upper bound we eventually obtain does not rely on the particular dependence on $X$ and $\hat{X}$, but only on the fact that they differ by only one element and that the data points are bounded with radius $D$ (so that the factors $1/n$ and $D$ appear in the bound, independently of $X$ and $\hat{X}$); hence, our notations as they currently stand will not cause a misunderstanding. But we do agree that, more formally, it is better to use the notation $\nu_{X}$, $\nu_{\hat{X}}$. We will fix the issues mentioned in the minor comments; indeed, these are typographical errors. **Questions:** * Thanks for this excellent observation. Indeed, the only place the neighborhood assumption between datasets is used is in the expression of $\gamma$ in equation (2.9). Lemma 2.1 ([RS18], Theorem 3.1) relies on three conditions: the Wasserstein construction in (2.7), the drift condition for the Lyapunov function in (2.8), and finally the estimate on $\gamma$ in equation (2.9), which bounds the one-step 1-Wasserstein distance between two semi-groups that in our context are associated with two datasets that differ by at most one element. As a result, the only place the neighborhood assumption between datasets is used is in the expression of $\gamma$ in equation (2.9). We will add some discussion to make this intuition more explicit in the revised version of the paper. * Thanks for the comment. Indeed, the probability you mentioned is proportional to the batch-size $b$. However, do not forget that there is the term $\eta/b$ in (B.9) and (B.10) in the proof of Lemma 3.3, and here $b$ comes from the definition of the SGD, where the factor $1/b$ makes the stochastic gradient an unbiased estimator of the full gradient.
Combining these two effects, you will obtain a bound on $\gamma$ which is independent of the batch-size $b$. * Indeed, it is very natural to ask if one can let $\sigma^{2}=0$ so that one can remove the additional noise added to the SGD in the case of non-convex loss functions. Unfortunately, the answer is no. Note that for the sake of notational convenience, we let $\sigma^2=\mathbb{E}\Vert\xi_1\Vert^{2}$ because the term $\mathbb{E}\Vert\xi_1\Vert^{2}$ appears many times in the bounds that we derived and it is convenient to use the notation $\sigma^{2}$. On the other hand, the probability density function $p(x)$ of $\xi_{1}$ implicitly depends on $\sigma^{2}$. If you let $\sigma^{2}\rightarrow 0$, $p(x)$ will converge to the Dirac delta distribution concentrated at $x=0$, so there will be no mixing, and in equation (3.8), $\hat{\eta}$ will have to go to zero. If $\hat{\eta}$ goes to zero, by the definition of $\bar{\eta}$, we will have $\bar{\eta}$ go to $1$, which will make the upper bound in equation (3.9) explode. Therefore, to conclude, we cannot let $\sigma^{2}=0$ because otherwise the upper bound in equation (3.9) will diverge and become a trivial upper bound. --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: First of all, I thank the authors for their detailed response. My main concern remains the combination of optimization and generalization bounds. SGD with constant step-size does not converge, and the final error depends on the noise. Adding artificial noise increases this error. And the authors decrease the generalization error. But it is quite natural to understand that a more random algorithm generalizes better. We need to compare the sum of those two bounds to conclude on the result. Authors say "We can directly combine our results with their results to obtain an optimization error + generalization error bound.
We will mention this in the next version of the paper.". I believe the authors that we can do it easily, and I am glad the authors consider doing it in the revised version. Can they quickly do it here by answering this message? I think this is an important point! Except for this, I mostly like this paper due to its novel approach, whereas it is indeed not a good advertisement for the technique to have somewhat worse results than the known ones in "simple" cases, as pointed out by several reviewers. A couple of minor remarks: - "In line 291, what we meant to say was that for our proof technique to work well, adding noise was beneficial. We absolutely agree with the referee that this does not mean that additive noise is a must for an algorithm to perform/generalize better.": I understood your point. Mine is just to say that it should be phrased differently. - "The notation A does not mean that it is deterministic, even though it is a function of the dataset, which is deterministic. The extra randomness comes from the randomness in the SGD, not the dataset.": Yes, it does! A function is by definition deterministic. It must return the same output each time it receives the same input. The notion of randomness necessitates the definition of measurable spaces and a function between them called a random variable. When we talk about a "random function", we generally refer to a random variable which outputs a function, i.e., its target space is a measurable space of functions. So we can either say "there is a family of functions $A_X: dataset \rightarrow R^d$, and we draw $X$ at random", or equivalently, we can define a single function and place $X$ as a second input, i.e., $A(X, dataset) \in R^d$, or say that $A$ takes a dataset as input and returns a random variable that itself returns a point in $R^d$. It only depends on which signature you prefer for $A$, and I guess the most natural one is $A: (dataset, random sampling) \rightarrow R^d$.
But this random sampling must be explicit in the signature of A; otherwise, A is meant to be deterministic. In conclusion, it would be great if the authors could - acknowledge they will rephrase l.291 (it was not clear in their rebuttal) - acknowledge they will modify the notation for the signature of A and the $\nu$ that were discussed in the same point of their rebuttal - provide here a complete optimization + generalization bound in the non-convex case to compare with previously known ones. Best regards. --- Reply to Comment 1.1.1: Comment: Thank you very much for going over our rebuttal. We do acknowledge that we will rephrase l.291 and modify the notations and definitions for $A$ and $\nu$. Regarding the last point, we can give the following example. Recently, there have been multiple results on optimizing non-convex functions using stochastic gradient Langevin dynamics. We will state the optimization error result from the paper "Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization, Xu et al." In Theorem 3.3 of the above-mentioned work, it was shown that gradient Langevin dynamics has the following optimization error after K iterations: $$ \text{Optimization error}(K) \leq \Theta e^{-\lambda K \eta} + \frac{C_{\psi}\eta}{\beta} + R_M$$ where $\lambda, \Theta, C_{\psi}, \beta$ and $R_M$ are problem-dependent parameters described in the paper. Similar results are given for stochastic gradient Langevin dynamics in Theorem 3.6. Simplified versions of the results for GLD and SGLD are presented in Corollaries 3.4 and 3.7, respectively. According to Corollary 3.4, the optimization error for gradient Langevin dynamics is $O(\varepsilon)$ after $K = O(d \varepsilon ^{-1} \lambda^{-1} \log 1/\varepsilon)$ iterations, where $\lambda$ is the uniform spectral gap of the continuous-time Markov process generated by the Langevin dynamics.
According to Corollary 3.7, the optimization error is $O(\varepsilon + d^{3/2}B^{-1/4} \lambda^{-1} \log 1/\varepsilon )$ after $K = O(d \varepsilon ^{-1} \lambda^{-1} \log 1/\varepsilon)$ iterations, where $B$ is the mini-batch size of the algorithm. Combining these two results with our generalization bounds directly gives the optimization + generalization performance of SGLD on a non-convex, dissipative loss. We will add this example in a more formal way to the appendix in the next version. We hope that this clarifies the concern.
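To spell the combination out schematically (suppressing constants, and writing $K'$ for the non-convexity constant of our Theorem 4.1 so as not to clash with the iteration count $K$; this is a sketch rather than a formal statement):

```latex
\mathbb{E}[\text{excess risk after } K \text{ iterations}]
\;\lesssim\;
\underbrace{\Theta e^{-\lambda K \eta} + \frac{C_{\psi}\,\eta}{\beta} + R_M}_{\text{optimization error (Xu et al., Thm. 3.3)}}
\;+\;
\underbrace{\frac{C}{n} + \frac{K'}{m}}_{\text{generalization error (our stability bound)}}
```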
Summary: This paper studies the generalization bounds of (noisy) SGD via Wasserstein stability. The paper presents a unified guideline to derive the Wasserstein stability for stochastic optimization with a constant step size, which makes it possible to derive stability bounds with a three-step proof technique: showing the optimizer is geometrically ergodic, obtaining a Lyapunov function for the optimizer and the loss, and bounding the discrepancy between the Markov transition kernels associated with the chains. With this guideline, the paper derives stability bounds for SGD with strongly convex problems, nonconvex problems, and a class between convex and strongly convex functions. The paper also develops stability bounds for noisy SGD in a nonconvex case. Strengths: The paper presents a new perspective for deriving stability bounds via techniques from applied probability. This connection between machine learning theory and applied probability is interesting and can have potential applications to various optimization algorithms. The paper develops time-uniform stability bounds, meaning that the stability bounds do not increase to infinity as the number of iterations goes to infinity. The analysis also applies to general additive noise, which extends the existing analysis developed for Gaussian noise. The paper also develops stability bounds for the standard SGD in the nonconvex case, although a dissipativity assumption is required. Weaknesses: The analysis for SGD in Section 3.3 is a bit complicated. For example, the choice of $\hat{\eta}$ in Eq (3.10) is too complex. The result is not quite intuitive and it is not clear how we can use this bound to explain the behavior of SGD. The bound in Theorem 3.2 is of an order larger than $O(1/(n\mu^5))$. This bound has a crude dependency on $\mu$. Since $\mu$ is often very small in practice, the bound is worse than the existing bound of the order $O(1/(n\mu))$ (HRS16). The bound in Theorem 4.1 is vacuous due to the term $K/m$.
From this stability bound, we cannot get meaningful generalization bounds. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Lemma 2.1 requires building a Lyapunov function. The paper chooses the function $V(\theta)=1+|\theta-\theta_*|^2$. Can you explain why this is a good choice of Lyapunov function? What is the intuition behind it? What is the basic principle for building a Lyapunov function? The paper considers noisy SGD for the nonconvex case. Can you explain intuitively why noise is needed in this case? What is the benefit of adding noise in the nonconvex case? **Minor comments**: Eq (2.6): should $\mathbb{P}(\theta_{n-1},A)$ be $P(\theta_{n-1},A)$? Eq (2.8): the meaning of $\hat{P}\hat{V}$ is not given. The meaning is given in the appendix. Below Assumption A2: "that that" Eq (D.21): $\theta - \hat{\theta}_*+...$ should be $ \theta - \hat{\theta}_*-...$. The same change should also be made in Eq (D.22). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not see negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our paper. Below are responses to your questions: **Weaknesses:** * We will add an intuitive explanation behind formula (3.10) and its implications for SGD. The parameter $\hat{\eta}$ appears in the definition of $\bar{\eta}$, which appears in the upper bound in equation (3.9) that controls the 1-Wasserstein algorithmic stability of the SGD. It is easy to see from equation (3.9) that the smaller $\bar{\eta}$ is, the smaller the 1-Wasserstein bound. By the definition of $\bar{\eta}$, the larger $\hat{\eta}$ is, the smaller the 1-Wasserstein bound. As a result, we would like to choose $\hat{\eta}$ to be as large as possible, and equation (3.10) provides an explicit value that $\hat{\eta}$ can take (which is already as large as possible from our theory). We will add more discussion and intuitive explanations in the revised version of the paper. * Our bound in Theorem 3.2 for the strongly convex case, $O(\frac{1}{n\mu^5})$, is indeed worse than the $O(\frac{1}{n\mu})$ bound given in [HRS16] in terms of its dependency on the strong convexity constant $\mu$. That being said, as we mentioned in the paper, the main novelty of the results for the strongly convex case is *not* the bound itself we derived, but *how* we obtained it. Here, our novelty is the introduction of the proof technique, which we believe is very novel and flexible: it can be applied to strongly convex / non-convex (dissipative) losses, and it can be rather easily extended to other optimization algorithms like SGD with momentum – it will hopefully form fertile ground for further articles. * The bound in Theorem 4.1 is indeed not tight due to the persistent $K/m$ term that does not vanish to zero as the number of samples $n$ increases. However, the bound is **not vacuous** unless $n$ is very large: depending on the value of $K$ (the measure of nonconvexity), the bound can still be informative for a nonasymptotic $n$.
On the other hand, the existing algorithmic stability bounds (cited in the paper) for similar non-convex problems typically increase with the number of iterations and become infinite and actually vacuous for any range of $n$ as the number of iterations increases. Compared to those bounds, our bound does not increase with the number of iterations, which we believe is a significant improvement. **Questions:** * Often, the Lyapunov function is required to be bounded from below by a positive constant, and some mathematical literature for simplicity requires $V\geq 1$. In order to create a positive function that is greater than $1$, the simplest function one can come up with is a quadratic plus $1$. The quadratic choice for the Lyapunov function is quite standard for SGD and related algorithms and is usually the first one to try. On the other hand, in order to use the strong-convexity and dissipativity conditions, it is easier to work with $|\theta-\theta_{\ast}|^{2}$ instead of $|\theta|^{2}$. The basic principle of building a Lyapunov function is to show that a drift condition can be satisfied, so that the expected value of the Lyapunov function can be uniformly bounded in time and will not grow as time increases. That being said, there is no principled guideline to design a Lyapunov function for an arbitrary optimization algorithm and objective function; however, for popular algorithms and problem classes, several Lyapunov functions have already been developed, as we mentioned in Line 100. Note that in optimization, to show convergence, we usually need the Lyapunov function to go to zero as the iteration number increases; here, because we do not need to show convergence to a point, it suffices that the Lyapunov function stays bounded by a constant over the iterations. * The paper considers noisy SGD for the nonconvex case.
The noise is useful in this case because in order to apply the Wasserstein perturbation theory [RS18], we need contraction in Wasserstein distance (geometric ergodicity). Informally, in the nonconvex case, the Markov chain associated with SGD might not be geometrically ergodic if the only source of randomness comes from minibatching. Adding noise in this case enables the Markov chain to access all the regions of the state-space with non-zero probability and makes the chain geometrically ergodic. Formally, in the strongly-convex case, one can easily use the synchronous coupling method to obtain Wasserstein contraction, so that no additional noise is needed. However, in the non-convex case, under a more general dissipativity condition, Wasserstein contraction cannot be obtained by using the synchronous coupling method. Instead, in order to obtain Wasserstein contraction (geometric ergodicity), we apply the theory developed in [HM11], which says that as long as the Markov chain satisfies a drift condition (Assumption A.1) that relies on the construction of an appropriate Lyapunov function, and a minorization condition (Assumption A.2), the Markov chain will converge in some weighted total variation distance, which in turn implies convergence in 1-Wasserstein. The dissipativity condition helps us show that we can indeed construct a Lyapunov function that satisfies Assumption A.1. However, the minorization condition (Assumption A.2) requires a certain mixing property that requires some continuity-type assumptions on the noise structure, and this is where we need the additional noise (since the noise from the mini-batch has a discrete nature and is not sufficient to obtain the mixing property, i.e., the minorization property). **Minor comments** We will fix these typographical errors and revise along your suggestions. Thanks for the feedback. --- Rebuttal Comment 1.1: Comment: Thank you for the point-to-point response.
While the results for the strongly convex cases are not tight, I appreciate the novelty of the analysis. One thing I am still not quite clear about is the dependency on $K/m$. The authors mention that the bounds involving this term are not vacuous. These terms are involved in Assumption 3.3. It seems that this assumption is not quite related to the sample size, and therefore it is not clear to me how this term is not vacuous. --- Reply to Comment 1.1.1: Comment: Thank you very much for going over our rebuttal. Regarding the $K/m$ term, we thought the concern was that this term does not go to zero as the sample size increases, hence the bound becomes vacuous when $n \to \infty$ (please let us know if we misunderstood your concern). What we tried to convey in our response was that we agree with you that the bound is not tight for large $n$ for the reason we mentioned above -- it indeed becomes vacuous **when $n$ goes to infinity**. However, when $n$ is not large, we believe that the bound can still be informative. More precisely, as the bound is essentially $$C/n + K/m $$ for **not large** $n$, the first term $C/n$ can dominate the $K/m$ term. Hence in such a case, we still obtain a meaningful bound, which **does not** explode as the number of iterations goes to infinity (as opposed to all existing bounds to our knowledge). We will mention this explicitly in the new version. We hope that this clarifies the concern. We would be happy to respond if there are any other questions.
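As an additional toy illustration of the Lyapunov/drift discussion above, here is a minimal numerical sketch (all constants are made up for illustration; a 2-d quadratic loss stands in for the general dissipative case):

```python
import numpy as np

# Toy numerical check of a geometric drift condition
#   E[V(theta_next)] <= rho * V(theta) + C
# for noisy SGD on a 2-d strongly convex quadratic, with the Lyapunov
# function V(theta) = 1 + |theta - theta_*|^2. The constants mu, eta,
# sigma, theta_* are made up for illustration only.
rng = np.random.default_rng(0)
mu, eta, sigma = 1.0, 0.1, 0.5
theta_star = np.array([1.0, -2.0])

def V(theta):
    return 1.0 + float(np.sum((theta - theta_star) ** 2))

def noisy_sgd_step(theta):
    grad = mu * (theta - theta_star)               # exact gradient of the quadratic
    xi = sigma * rng.standard_normal(theta.shape)  # additive Gaussian noise
    return theta - eta * grad + xi

rho = (1.0 - eta * mu) ** 2          # contraction factor in squared distance
C = (1.0 - rho) + 2 * sigma ** 2     # since E|xi|^2 = d * sigma^2 with d = 2

for theta0 in (np.zeros(2), np.array([10.0, 10.0])):
    samples = [V(noisy_sgd_step(theta0)) for _ in range(20000)]
    lhs, rhs = float(np.mean(samples)), rho * V(theta0) + C
    print(f"E[V(next)] ~ {lhs:.3f}  vs  rho*V + C = {rhs:.3f}")
    assert lhs <= rhs * 1.01         # drift holds up to Monte Carlo error
```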
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper derives Wasserstein stability bounds in a variety of cases for a "surrogate loss" under convex/non-convex settings. Strengths: The problem is interesting. The bounds in the non-convex case can be useful. The paper is well-written. Weaknesses: I am not sure how novel the convex part is, as also noted by the authors. This part seems to remove the projection step; however, it is also done for a surrogate loss, so I am not sure if these are comparable. Also, for the non-convex case, it is hard to see the relationship w.r.t. existing results due to the notion of surrogate loss. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I have the following questions. One thing I did not understand is the notion of the surrogate loss (as noted). The relationship between the original cost function $f$ and $\ell$ is never clarified (I checked the appendix and there is no definition of this). It seems that the whole paper is written for $\ell$, which is quite confusing. Why would the authors not assume $f$ is $L$-Lipschitz and derive everything for $f$ under this restrictive assumption? This should be made clear, as there is no way for a reader to understand what is going on in this paper, fundamentally, if the notion of the surrogate and its relationship to $f$ is not made clear at the beginning of this work. Note also that in Definition 2.1, it reads as if the cited paper (HRS16) defines the stability for a surrogate loss, whereas there is no mention of surrogate loss in HRS16. Upon checking a few related works which use the surrogate loss notion, it seems that convergence in function value may not happen here. This of course makes the comparison impossible with relevant results in the literature. Does any surrogate loss work for these results? Do the results hold for a particular selection of losses, e.g., depending on $p$ as in earlier works? Please clarify. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Theory work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time invested in our paper. As far as we can understand, the reviewer has two main concerns: 1) the novelty of the strongly convex part and 2) the use of the surrogate loss function. Below, we clarify both of these points, and we hope that the reviewer could reconsider their score based on our explanations. * Novelty of the strongly convex part: As we mentioned in the paper (and as the reviewer acknowledged as well), the main novelty of the results in the strongly convex part is **not** the bound itself we derived, but **how** we obtained it. Here, our novelty is the introduction of the proof technique, which we believe is novel and very flexible: it can be applied to strongly convex / non-convex (dissipative) losses, and it can be rather easily extended to other optimization algorithms like SGD with momentum – it will hopefully form fertile ground for further articles. We hope that the reviewer could reposition our contributions from this perspective. * The use of surrogate losses: We agree that the requirement of surrogate losses is a drawback of our framework. However, based on the reviewer's concerns, we suspect that the use of surrogate losses might seem more daunting than it actually is. We will now clarify why we need surrogate losses, and we will provide an example which we believe is a very natural case and has been used in prior studies. *The need for surrogate losses:* As our bounds are based on the 1-Wasserstein distance, we need the surrogate loss $\ell$ to be a Lipschitz continuous function. On the other hand, for the original loss $f$ we need some sort of convexity (e.g., strongly convex, convex, or dissipative) and we need the gradient of $f$ to be Lipschitz continuous. Unfortunately, under these assumptions, we cannot further impose that $f$ **itself** be Lipschitz, because no function satisfies all of these assumptions simultaneously. Hence the requirement for surrogate losses.
*Examples of surrogate losses:* Example 1: We can choose the surrogate loss as the *truncated loss*, such that: $$\ell(\theta,x) = \min(f(\theta,x) , C) $$ where $C>0$ is a chosen constant. This can be seen as a "robust" version of the original loss, which has been widely used in robust optimization and is (only) conceptually linked to adding a projection step to the optimizer. Example 2: Another natural setup for our framework is the $l_2$-regularized Lipschitz loss that was also used in [FR2021] (see Section 3 in their paper). As opposed to the previous case, for the sake of this example let us consider $\ell$ as the true loss and $f$ as the surrogate loss. Then, we can choose the pair $f$ and $\ell$ as follows: $$ f(\theta,x) = \ell(\theta,x) + \lambda ||\theta||_2^2 $$ where $\lambda >0$. Intuitively, this setting means that we have a true loss $\ell$ which can be Lipschitz, but in the optimization framework we consider a regularized version of the loss – which has also been considered in various studies, the closest to us being [FR2021]. We sincerely appreciate that the reviewer has gone over the literature on surrogate losses. We agree this part was somewhat glossed over, assuming the reader might be familiar with the concepts. In the next version, we will provide further explanation and examples about surrogate losses, as noted above. We will also rephrase the part where we introduced the concept of algorithmic stability, as the reviewer suggested. If there are further questions, we remain at your disposal. – References: [FR2021] Tyler Farghly and Patrick Rebeschini. Time-independent generalization bounds for SGLD in non-convex settings. In Advances in Neural Information Processing Systems, volume 34, pages 19836–19846, 2021. --- Rebuttal Comment 1.1: Comment: Thanks for your response.
I think, however, that the main difficulty of obtaining such results in the literature lies precisely in the technical challenge of using the original cost function to measure the performance, whereas, again, as all results are stated with a vague notion of surrogate loss here, I do not think there is any comparison to the rest of the literature (I'm surprised, also, that other reviewers commented on the results as if they apply to the standard function-value setting). Essentially, while you are running the algorithm using the gradients of $f(\theta, x)$, you are _measuring_ the performance w.r.t. an arbitrary loss (since it's not specified here) $\ell$. Naturally, $\min_{\theta} \ell(\theta)$ could differ arbitrarily from the true minimum. For me, this makes it impossible to assess the results here. Is there a way to assume $f$ and $\ell$ are "close" in some way, which can then be used in the error bounds as an extra term? I think this really needs clarification for the reader -- if the results are interpreted without this observation, then the paper may not even serve the community positively. --- Reply to Comment 1.1.1: Comment: Thank you for going over our rebuttal. We shall reiterate that there are various generalization bounds that have to rely on surrogate losses for different technical reasons. For recent examples, we kindly request you to check the settings in [FR2021, RZGS23, RBG+23]. Using the original loss in the analysis is usually not the only challenge -- or not the main challenge in general (yet we agree it is indeed a challenge); there might be several other important challenges that need to be solved, and solutions to such challenges can still be valuable to the community. On the other hand, sometimes it is even more desirable to use surrogate losses. For example, consider the classification task where the loss function we are interested in is the 0-1 loss, which is non-smooth and non-convex in nature.
Hence, one runs gradient descent on a (squared) hinge loss or logistic loss to better optimize the model. But eventually, the final risk bound is obtained in terms of the 0-1 loss, and we are not interested in getting the risk bound on the logistic loss or (squared) hinge loss. For the details, please refer to Bartlett et al. (2006). That being said, we agree with you that the setting might seem arbitrary for a reader who is not accustomed to surrogate losses. As you mentioned, in all the examples we have provided, the two losses are "close" to each other in certain senses. For instance, in Example 1 from our first response, the losses match as $C \to \infty$. For Example 2, the two losses match as $\lambda \to 0$. We can definitely include an additional error term in the bounds coming from the use of surrogate losses, as you suggested. Essentially this will include a term of the following form: $$ | \mathbb{E}_{\theta, x}[f(\theta, x) - \ell(\theta,x) ] | $$ which measures how much the surrogate loss deviates from the original loss **on average**, and this can be a small value depending on the distribution of the algorithm output. We agree that this would increase the clarity of the presentation; we will add this in the next version. Thank you for the suggestion. We hope that this addresses the concern, and we remain at your disposal if there are any more questions. (1). Bartlett, Peter L., Michael I. Jordan, and Jon D. McAuliffe. ``Convexity, classification, and risk bounds.'' Journal of the American Statistical Association.
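To make the two surrogate constructions and the proposed deviation term concrete, here is a small self-contained sketch of our own (the quadratic `true_loss`, the sample points, and the constants are all hypothetical, not from the paper):

```python
# Hypothetical illustration of the two surrogate-loss examples and the
# proposed average-deviation term |E[f - ell]|; our construction only.

def true_loss(theta, x):
    # a simple non-negative loss, standing in for the original loss
    return (theta - x) ** 2

def truncated_loss(theta, x, C):
    # Example 1: "robust" truncated surrogate  min(f, C)
    return min(true_loss(theta, x), C)

def regularized_loss(theta, x, lam):
    # Example 2: l2-regularized surrogate  ell + lam * ||theta||_2^2
    return true_loss(theta, x) + lam * theta ** 2

def avg_deviation(surrogate, theta, xs):
    # empirical version of | E_x[ surrogate - true ] |
    diffs = [surrogate(theta, x) - true_loss(theta, x) for x in xs]
    return abs(sum(diffs) / len(diffs))

xs = [0.0, 0.5, 2.0]
# The deviation vanishes when C is large enough that truncation never triggers...
print(avg_deviation(lambda t, x: truncated_loss(t, x, 100.0), 0.0, xs))  # → 0.0
# ...and grows once C actually clips the loss on some samples.
print(avg_deviation(lambda t, x: truncated_loss(t, x, 1.0), 0.0, xs))    # → 1.0
```

The same `avg_deviation` quantity shrinks to zero for the regularized surrogate as $\lambda \to 0$, matching the limiting behaviour described above.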
Accurate Interpolation for Scattered Data through Hierarchical Residual Refinement
Accept (poster)
Summary: This paper introduces the Hierarchical INTerpolation Network (HINT), a hierarchical framework that leverages the residuals on observed points to guide the estimation of the target function. HINT comprises multiple lightweight interpolation blocks arranged sequentially. The first block estimates the main component of the target function, while subsequent blocks predict the residual components using the residuals from the preceding blocks. By accumulating the main component and residual components, HINT produces the final interpolation results. Experiments demonstrate the effectiveness of HINT, showing that it achieves better performance than existing interpolation algorithms across a wide range of datasets. Strengths: 1) This paper is easy to follow. 2) The ablation study clearly validates the effectiveness of the proposed auxiliary loss term, masked observed points, and local constraint in HINT. Weaknesses: 1) The main concern is that the novelty of this paper is not clearly presented. The major contribution of the paper seems to be the proposed hierarchical residual refining framework, but its novelty has not been sufficiently addressed. It is worth noting that previous studies, specifically [1] and [22] as mentioned in the last paragraph of the Related Work section, have already proposed hierarchical residual architectures for time series forecasting. However, the paper does not discuss the challenges of applying the hierarchical residual refining framework to the field of interpolation, given that interpolation can also be viewed as prediction from existing observations. Nor does it emphasize the distinction between the proposed architecture and existing architectures. As a result, the relation between this paper and the existing literature is not clear and the main contribution of the paper cannot be evaluated. 2) The presentation is not clear and should be carefully revised. 
For example, the positions of the observed points are defined twice, in Line 83 and Line 84. The expression “It outputs and outputs the coarse function value predictions at both the observed and target points” in Line 94 is wrong. 3) The experimental results provided in the paper are limited. For instance, the authors did not conduct experiments to investigate the performance of the proposed method with varying ratios of observed points. Therefore, it is hard to evaluate the effectiveness of the proposed method under different data conditions. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The motivation of the proposed hierarchical residual refining framework is not clear. Could you explain the motivation of the hierarchical residual refining framework from the perspective of experiments, such as an ablation study, or from theoretical analysis? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations of this paper concern the hyperparameter K in the hierarchical local constraints. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Response to Reviewer WnAW **Comment 1:** The paper's novelty, especially regarding its hierarchical residual refining framework, isn't clearly articulated, with insufficient distinction from previous works on hierarchical residual architectures. **Response 1:** Thank you for your thoughtful comments and concerns regarding the novelty of our work. We'd like to address these concerns systematically: 1. **Main contributions of our HINT**: 1. Our paper's key contribution is a novel hierarchical residual refining framework for scattered data interpolation. While inspired by N-Beats [22] and N-Hits [1], our model diverges significantly from both of them in its application to this specific task (as detailed below). 2. The hierarchical local-constraint approach progressively refines residual predictions, leading to reduced residuals and improved interpolation accuracy. 3. Our model has outperformed existing methodologies across all datasets tested. 2. **Differences between scattered data interpolation and time series forecasting**: 1. Data Dimensionality: Time series forecasting is 1D, focusing on modeling long-term dependencies. Scattered data spans 2D or higher, revealing complex spatial relationships. 2. Sparsity: Time series data are densely recorded within time frames, whereas scattered data is often sparse. 3. Ordering: Time series data is consistently ordered over intervals, whereas scattered data is randomly spaced, complicating holistic approaches. 4. Function Priors: Time series often exhibit cyclical and trending patterns, as underscored in [1] and [22]. Scattered data rarely has such clear priors, and when it does, its random distribution complicates application. 3. **Differences between our HINT and N-Beats [22] and N-Hits [1]**: 1. On methodology: Both our method and [1] & [22] use a dual residual structure for signal decomposition. 
However, while [1] & [22] categorize time series signals into components like cyclical and trending, facilitating prediction, our HINT refines by capturing diminishing residual functions, aligning closer with function approximation. 2. Basic block implementation: The predictive blocks in [1] & [22], tailored for time series, harness its orderly and dense nature, using MLPs to predict signal base weights. Such networks are not ideal for scattered data interpolation. In contrast, our approach uses a modified Transformer encoder, leveraging self-attention to model correlation between observation and target points. 3. On hierarchical approach: [1] decomposes by sampling and interpolating the whole time series signals based on various cycle scales. Our hierarchy, rather than down-sampling scattered data, introduces local constraints during correlation modeling, diverging from [1]'s methods. We hope this clarifies the distinctions between our work and prior art, emphasizing the novelty of our approach. We appreciate the opportunity to elaborate on our contributions and look forward to any further queries or comments you might have. **Comment 2:** Typos at line 83 and 94. **Response 2:** Thanks for pointing out the typos. 1. The end of line 83 will be corrected from "the positions of the observed points" to "the values of the observed points". 2. We'll rectify the redundancy in Line 94. Apologies for any oversight. We'll revisit the manuscript for thoroughness and clarity. **Comment 3:** Lack of experiments to investigate the performance of the proposed method with varying ratios of observed points. **Response 3:** Thank you for raising this valid concern. We concur that evaluating the performance of our method with varying ratios of observed points would enrich our validation process. 
We have conducted supplementary experiments on the Mathit-2D dataset to gauge the impact of different numbers of observation points on interpolation accuracy, as depicted in the figure in [global response](https://openreview.net/forum?id=8d9wVXri89&noteId=RMsTw2cPZh). These findings further underscore the superiority of our approach. Not only does it excel in terms of average interpolation accuracy, but it also outperforms existing interpolation algorithms across all levels of observed point numbers. These results will be appended to the supplementary materials for further clarity. **Comment 4:** Motivation of the proposed hierarchical residual refining framework is not clear, explain the motivation of hierarchical residual refining framework from the perspective of experiments **Response 4:** Thank you for inquiring about the motivation of our hierarchical residual refining framework. 1. Experimentally, Sec.4.2's Residual Visualization (on Pages 7 and 8) offers insight, especially Fig. 4. It depicts the outputs of each block, their cumulative effects, and the error map on a Perlin test set interpolation task. 1. The Main block primarily captures the interpolation task's essence but misses finer details. Subsequent Residual Blocks progressively capture increasing detail levels, with the third block being especially subtle. 2. As we layer the residual predictions, the cumulative output more closely aligns with the ground truth (refer to Fig. 3's left column), diminishing the error map. 2. Theoretically, scattered data interpolation often involves intricate details. Our framework, based on residual prediction, segments the interpolation task, easing the challenge. This tiered approach better captures the target function, outperforming previous techniques by fully utilizing information of observed point values, enhancing accuracy. --- [1] Cristian Challu, et al. N-hits: Neural Hierarchical Interpolation for Time Series Forecasting. 
arXiv preprint arXiv:2201.12886, 2022 [22] Boris N Oreshkin, et al. N-beats: Neural basis expansion analysis for interpretable time series forecasting. arXiv preprint arXiv:1905.10437, 2019. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their efforts in preparing the rebuttal. The authors have partially addressed my concerns. In particular, the performance of the proposed method with varying ratios of observed points is validated. However, the novelty of this paper is still not clearly addressed or demonstrated. Therefore, I will keep my rating. The authors attribute the main difference between this paper and related existing works to their application to different data structures. This paper is claimed to address the problem of scattered data interpolation, while related works like [1] and [22] focus on time series forecasting. The difference between these two data structures is then elaborated. However, there is still no in-depth analysis or demonstration of this difference, either in theory or through empirical study. For example, how does the sparse data structure motivate the proposed method, and what kinds of designs are required for a sparse data structure compared to a dense data structure or data without a fixed form (e.g., images)? Although capturing diminishing residual functions and the Transformer encoder are claimed to specifically address the randomness of scattered data, the relationship is not clearly explained and validated. Furthermore, this paper is of limited technical novelty, especially in the design of the network architectures. The core difference between the proposed modified Transformer encoder for basic block implementation and existing methods such as TFR-Transformer [2], NIERT [4], and ANPs [14] is not clarified. In the response to Reviewer ce6Q, the authors claimed that NIERT [4] and HINT exhibit significant differences in architecture and methodology. 
However, using a series of lightweight interconnected blocks to replace the singular Transformer encoder in NIERT (architecture), and adopting a hierarchical architecture to progressively improve the results (methodology), are common techniques in deep learning. I cannot identify significant differences compared to existing methods, especially for the top conference like NeurIPS. --- Reply to Comment 1.1.1: Title: Response to Reviewer WnAW Comment: We appreciate the reviewer's efforts and comments. While we respect their perspective, we respectfully disagree on certain aspects. We'd like to address three specific concerns: the connection between our method and the scattered data structure, the novelty of our method, and the effectiveness of our approach. 1. **Connection to Scattered Data Structure:** Works on time series like [1] and [22] are tailored for ordered, dense, and contiguous time series data. In stark contrast, our method is specifically designed for scattered data interpolation, inherently and perfectly accommodating its structure. The arbitrariness and sparsity of scattered data necessitate its encoding and processing as a set, which is why we adopted the Transformer-based block, leveraging the permutation invariance property of its attention mechanism. We believe this is a widely recognized principle in scattered data interpolation research. Furthermore, tasks related to pixel interpolation, such as super-resolution and image completion, are vastly different in nature from scattered data interpolation. The substantial differences in data structure and distribution between them are evident and, we believe, require no exhaustive discussion. 2. **Novelty of Our Approach:** The reviewer perceives similarities between our basic block and NIERT and feels our overall framework echoes that of [1] and [22] in time series forecasting. Such an assessment, we argue, is an oversimplification. 
Our significant enhancements to the basic interpolation block, tailored for the residual prediction framework, shouldn't be overlooked, as detailed in Sec. 2.3 and the Supplementary material. This clearly differentiates it from NIERT. Moreover, our hierarchical residual refinement framework, combined with the *hierarchical local constraints*, is an integral and effective advancement that should be acknowledged. Additionally, within the domain of scattered data interpolation, we are the first to introduce such a hierarchical residual refinement framework, further underscoring our method's novelty. 3. **Effectiveness of Our Approach:** Our method has outperformed existing approaches in interpolation accuracy across multiple datasets, advancing the realm of high-precision scattered data interpolation. Moreover, the output objective components of each block transition in intensity from global to local scales, aligning with our hypothesis and exhibiting partial interpretability. This accomplishment warrants recognition. In summary, we're grateful for the reviewer's diligence and contribution. We hope our clarifications provide a more lucid and balanced evaluation of our work. ----- [1] Cristian Challu, et al. N-hits: Neural Hierarchical Interpolation for Time Series Forecasting. arXiv preprint arXiv:2201.12886, 2022 [22] Boris N Oreshkin, et al. N-beats: Neural basis expansion analysis for interpretable time series forecasting. arXiv preprint arXiv:1905.10437, 2019.
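As a reading aid for the hierarchical residual refinement discussed throughout this thread, the following is a toy sketch of our own: it substitutes simple k-nearest-neighbour averaging for HINT's Transformer-based blocks, and the 1D data and the shrinking-k schedule are invented. A main block fits a coarse prediction, each later block fits the residual left on the observed points, and the target prediction is the accumulated sum.

```python
# Toy coarse-to-fine residual refinement (our illustration, not the
# authors' implementation). Each "block" is a k-NN average; shrinking k
# mimics the hierarchical local constraints.

def knn_interp(x_obs, y_obs, x_query, k):
    # average of the k nearest observed values, per query point
    out = []
    for xq in x_query:
        nearest = sorted(range(len(x_obs)), key=lambda i: abs(x_obs[i] - xq))[:k]
        out.append(sum(y_obs[i] for i in nearest) / len(nearest))
    return out

def hint_like(x_obs, y_obs, x_tgt, ks=(4, 2, 1)):
    res = list(y_obs)                # residual left at observed points
    pred = [0.0] * len(x_tgt)        # accumulated prediction at targets
    for k in ks:                     # main block, then residual blocks
        # each block "re-predicts" the observed points and the targets
        y_hat_obs = knn_interp(x_obs, res, x_obs, k)
        y_hat_tgt = knn_interp(x_obs, res, x_tgt, k)
        res = [r - p for r, p in zip(res, y_hat_obs)]    # residual shrinks
        pred = [a + b for a, b in zip(pred, y_hat_tgt)]  # accumulate output
    return pred, res

x_obs = [0.0, 1.0, 2.0, 3.0]
y_obs = [0.0, 1.0, 4.0, 9.0]         # f(x) = x^2 sampled at observed points
pred, res = hint_like(x_obs, y_obs, [1.5])
print(res)  # residuals at the observed points are driven to exactly zero here
```

The point of the sketch is the mechanism, not accuracy: later blocks see only what earlier blocks failed to explain, so the residual on the observed points shrinks stage by stage, mirroring the decomposition shown in the paper's Fig. 4.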
Summary: The authors proposed an algorithm named Hierarchical Interpolation Network (HINT) to predict unseen point values in scattered data, replacing the use of manually designed interpolation algorithms. HINT uses the residuals on observed points to guide target function estimation and hierarchical local constraints in correlation modeling between observed and target points. Experiments show that it outperforms existing interpolation algorithms in accuracy on 3 synthetic datasets. -- Post rebuttal: I read the rebuttal from the authors. It addresses some of my concerns on the comparisons and I would like to keep my score. Strengths: The algorithm leverages a neural network to adaptively discern the underlying distribution of the target function from a function set or scattered dataset, achieving state-of-the-art performance on four datasets. Weaknesses: 1. Limited novelty in network design. The mask mechanism and partial attention are used in NIERT. The Embedding Stage and Correlation Modeling in the Transformer blocks are very similar to NIERT's. 2. Inadequate method comparisons. The paper compared Neural Processes algorithms proposed in 2018, 2019, and 2020, but later developments from 2022, such as [1] and [2], are not included. 3. It did not use the PhysioNet dataset, which is compared in NIERT. In the SM: 1. It did not explain why the method has more FLOPs than NIERT on some datasets. 2. Mistake: In SM line 105, ‘On the PTV dataset,’ should be ‘On the Perlin dataset,’ [1] Wang, Qi, Marco Federici, and Herke van Hoof. "Bridge the Inference Gaps of Neural Processes via Expectation Maximization." The Eleventh International Conference on Learning Representations. 2022. [2] Liu, Huafeng, et al. "Learning Intrinsic and Extrinsic Intentions for Cold-start Recommendation with Neural Stochastic Processes." Proceedings of the 30th ACM International Conference on Multimedia. 2022. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How many samples, at a minimum, need to be observed to guarantee the accuracy of your algorithm on this interpolation task? Is there any theory supporting such a lower limit? Can it be used for higher-dimensional or more complex interpolation, such as interpolation for pixels (inpainting, super-resolution, video interpolation)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The algorithm and the task setting did not consider the situation of incorrect (noisy) observation points, which is common in theoretical and engineering domains. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Response to Reviewer ce6Q We are grateful to the reviewer for the comprehensive comments. The following responses are structured to address each comment. **Comment 1:** Limited novelty in network design. **Response 1:** Our HINT and NIERT exhibit significant differences: 1. **On architecture**: NIERT is built upon a singular Transformer encoder as its core model. HINT employs a sequence of lightweight interpolation blocks based on the Transformer encoder, which are interconnected through residual connections. 2. **On methodology**: NIERT's approach involves directly predicting the function's values at target points. HINT's strategy employs a hierarchical refinement process. It sequentially predicts the primary component of the function and a series of residual components, and combines them. 3. **Adaptations from NIERT**: Our interpolation block is inspired by NIERT. Yet, we've made substantial adjustments to fit our broader architecture: - Masked observed points are introduced as input, enabling a re-prediction of the function values at the observed points. This, in turn, provides us with the function's residual at the observed points, which guides the subsequent residual function predictions. - Local constraints are applied in correlation modeling, which ensures that our interpolation block can interpolate the function's details at different scales. We believe our enhancements distinctly set HINT apart from NIERT, adding unique value and novelty. We trust this clarifies the concerns raised. **Comment 2:** Inadequate method comparisons. **Response 2:** 1. After thorough research on recent NPs algorithms, we observed that most either mirror methods we've compared or aren't apt for accuracy comparisons, as outlined below. 1. **Regarding [1]**: We previously examined this method. 
After a detailed review of its appendix and [review feedback](https://openreview.net/forum?id=A7v2DqLjZdq&noteId=19Gw_u0rbIy), we observed that its accuracy wasn't notably superior to techniques like CNP. Since we've already benchmarked against CNP, ANP, and BANP, we felt it redundant to include this method in our comparisons. 2. **Regarding [2]**: We noted that this method applies NPs algorithms to recommendation systems. Consequently, it was not considered relevant for comparison in the context of our study due to its weak connection to our task. 3. **Highlight on Notable Recent Work**: It's worth noting that among the recent developments, Transformer Neural Processes (TNP) [3] stands out as a significant contribution. However, its architecture is almost identical to NIERT's. To avoid redundancy, we chose not to include it in our comparisons. In summary, we've selected methods for comparison that are leading figures within NPs algorithms. Combined with NIERT and TFR-Transformer, we believe they offer a balanced comparison. **Comment 3:** It did not use the PhysioNet dataset. **Response 3:** We omitted the PhysioNet dataset primarily because it's a 1D time series. As noted in [4], scattered data interpolation typically involves data points irregularly spaced in 2D or higher spaces. While NIERT assessed their model on PhysioNet, given our research focus, we didn't find it essential for our evaluation. **Comment 4:** Why more FLOPs than NIERT on some datasets? **Response 4:** The two primary reasons are: 1. HINT encompasses several blocks, each with its own embedding layer and prediction layer, whereas NIERT has only one embedding layer and one prediction layer. 2. HINT processes more input tokens, encompassing tokens from both the observed and the masked observed points in addition to the target points, leading to (2n+m) tokens, while NIERT uses (n+m) tokens. **Comment 5:** Typos. **Response 5:** The error has been corrected as suggested. 
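The token-count argument in Response 4 can be sanity-checked with quick arithmetic. The sketch below is our own back-of-envelope estimate: it keeps only the quadratic attention term and uses hypothetical sizes for `n`, `m`, and the embedding width `d`, none of which come from the paper.

```python
# Rough illustration (our assumptions) of why (2n + m) tokens per block
# cost more than (n + m): self-attention FLOPs grow roughly
# quadratically with the token count.

def attention_cost(tokens, d=64):
    # dominant term of scaled dot-product attention: O(tokens^2 * d)
    return tokens ** 2 * d

n, m = 64, 448                          # hypothetical observed / target counts
niert = attention_cost(n + m)           # one encoder over (n + m) tokens
hint_per_block = attention_cost(2 * n + m)  # per HINT block, before multiplying
                                            # by the number of blocks
print(hint_per_block / niert)           # ratio > 1 for any n > 0
```

Since every HINT block pays this larger per-block cost and the costs add across blocks, HINT's total FLOPs can exceed NIERT's even though each block is lightweight.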
**Comment 6:** On the minimum number of observed points needed to guarantee accuracy. Is there any theory supporting such a lower limit? **Response 6:** We observed that a decrease in the number of observed points correlates with an increase in interpolation error. Thus, to maintain our model's accuracy, the minimum required observed points would align with those in our test sets. Regarding a theoretical foundation, crafting such a framework is complex and outside our study's current scope. Real-world data often eludes neat theoretical frameworks. Hence, pinpointing a direct theoretical link between observed point counts and our method's accuracy remains challenging. We appreciate your emphasis on the theoretical aspect, which undeniably stands as a crucial avenue for future research. **Comment 7:** On high-dimensional interpolation or more complex interpolation for pixels. **Response 7:** Regarding your question on high-dimensional interpolation, Reviewer 746Q posed a similar suggestion. Please refer to our response to 746Q, where we've included pertinent experiments. Pixel-related interpolation tasks differ considerably from scattered data interpolation. These tasks usually involve data points uniformly distributed on a dense grid, imbued with rich semantic information. Adapting our method, which is tailored for scattered data interpolation, to these tasks would likely necessitate substantial alterations. **Comment 8:** Limitation in handling noisy data **Response 8:** Our study centers on high-precision interpolation, assuming observation points are accurate. Addressing noisy data poses unique challenges and falls outside this research's purview. We recognize its significance and might explore it in subsequent works. --- [1] Wang, Qi, et al. Bridge the Inference Gaps of NPs via Expectation Maximization. ICLR, 2022. [2] Liu, Huafeng, et al. Learning Intrinsic and Extrinsic Intentions for Cold-start Recommendation with Neural Stochastic Processes. ACM MM, 2022. 
[3] Nguyen, Tung, et al. Transformer Neural Processes: Uncertainty-aware meta learning via sequence modeling. arXiv, 2022. [4] Wendland, Holger. Scattered data approximation. Vol. 17. Cambridge university press, 2004. --- Rebuttal Comment 1.1: Title: Response to Reviewer ce6Q Comment: We sincerely thank the reviewer and the AC for their efforts. We hope our response adequately addresses and clarifies the concerns raised. --- Rebuttal 2: Comment: Dear Reviewer, Could you please acknowledge that you have reviewed the author's response? If you have decided to maintain or revise your score, kindly provide a brief explanation if the rebuttal has or hasn't addressed your initial concerns adequately. Your prompt feedback is essential to the evaluation process. Best regards, Your AC
Summary: This work presents a new Transformer-based approach for scattered data interpolation. By extending NIERT [4] in several aspects, the authors achieved improved results on the target task. The extensions include 1) hierarchical residual refinement to produce fine-grained interpolation results and 2) hierarchical local constraints to constrain the set of observed points used in the interpolation block. In short, unlike existing approaches, the authors attempt to conduct scattered data interpolation in a progressive fashion by estimating interpolation residuals within a gradually narrowing search space. Strengths: 1) The proposed components were reasonably designed to obtain fine-grained interpolation results. 2) Intensive ablation studies support the effectiveness of these components. Weaknesses: 1) The datasets used in the experiments consist of two-dimensional data. Performance should be evaluated on sparse datasets of much higher dimension. 2) Some explanations were given in the embedding stage, but it is very difficult to interpret why the masked tokens for the given observed points are necessary. 3) The local constraint K may be problematic, as it can be dependent on the data properties of each application. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please consider the weaknesses 1), 2), 3). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Response to Reviewer RpoB Many thanks to the reviewer for the thorough feedback. We will proceed to respond to each of the comments provided. **Comment 1:** The datasets used in the experiments consist of two-dimensional data. Performance should be evaluated on sparse datasets of much higher dimension. **Response 1:** We acknowledge the importance of evaluating performance on sparser datasets of higher dimensions. Our initial choice of two-dimensional datasets was driven primarily by their availability and prominence in the literature. In response to your suggestion, we synthesized a 10-dimensional dataset, termed "D10". Within this dataset, each function for sampling an interpolation task arises from the summation of several randomly chosen 10-dimensional Gaussian functions. We compared the interpolation performance of HINT against existing approaches on this dataset, and the results can be observed in the appended table. Notably, the outcomes demonstrate that the interpolation precision of HINT can effectively scale to datasets with much higher dimensions. |Interpolation approach|Interpolation accuracy (MSE $\times10^{-4}$) on D10| |:---:|:---:| |CNP|35.623| |ANP|12.578| |BANP|12.077| |TFR-Transformer|7.465| |NIERT|5.496| |**HINT**|**4.173**| Due to space constraints, we are unable to provide a detailed description of the D10 dataset here. The comprehensive construction process of D10 and the associated results will be placed in the supplementary materials. **Comment 2:** It is difficult to interpret why the masked tokens for the given observed points are necessary. **Response 2:** The primary intention behind using "masked observed points" is to ensure consistency with the masked target points being input. By doing so, we can obtain re-predictions on the observed points $\hat{y_O}^{(l)}$, which are consistent with the predictions on target points $\hat{y_T}^{(l)}$. 
This consistency stems from the fact that both predictions emerge from the same function: $\hat{f}(x_{\mathrm{masked}}) = \mathrm{Block}^{(l)}(x_O, y_O, x_{\mathrm{masked}})$ evaluated at both observed and target points. Consequently, the residuals from re-predictions on the observed points and the residuals from predictions on the target points are strongly correlated. By easily obtaining the former, we can effectively predict the latter. We appreciate the reviewer's keen observation, and we hope this clarification elucidates our methodology more comprehensively. **Comment 3:** The local constraint K may be problematic, as it can be dependent on the data properties of each application. **Response 3:** Indeed, the hyperparameter $K$ for the local constraint is application-dependent, an aspect we have given careful consideration. However, we don't perceive this as "problematic". In our research, the value of $K$ for each dataset was meticulously determined through a series of experiments to ensure optimal performance. Moreover, we validated the efficacy of this parameter through ablation studies. We believe that, with appropriate selection, the constraint remains highly effective. We hope this addresses your concerns. --- Rebuttal Comment 1.1: Comment: Several parts were well addressed in the rebuttal, and I have no more concerns in this work. I'd like to keep the initial rating (borderline accept). --- Reply to Comment 1.1.1: Title: Response to Reviewer RpoB Comment: We appreciate the reviewer's efforts and response. It's gratifying to know that we've successfully addressed and resolved all the concerns raised.
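Response 3's point about the local-constraint hyperparameter $K$ can be visualised with a small sketch of our own construction (a 1D toy, not the authors' implementation): each query point is restricted to attending to its $K$ nearest observed points, and decreasing $K$ strengthens the locality of the constraint.

```python
# Toy k-NN local constraint for correlation modelling (our illustration).
# mask[i][j] == True means query point i may attend to observed point j.

def knn_attention_mask(x_obs, x_query, K):
    mask = []
    for xq in x_query:
        # indices of observed points, nearest first
        order = sorted(range(len(x_obs)), key=lambda j: abs(x_obs[j] - xq))
        allowed = set(order[:K])
        mask.append([j in allowed for j in range(len(x_obs))])
    return mask

x_obs = [0.0, 1.0, 2.0, 3.0]
mask_coarse = knn_attention_mask(x_obs, [1.4], K=4)  # no effective constraint
mask_fine = knn_attention_mask(x_obs, [1.4], K=1)    # strongest locality
print(mask_coarse[0])  # every observed point visible
print(mask_fine[0])    # only the single nearest observed point visible
```

In a hierarchical schedule (large $K$ in the main block, smaller $K$ in later residual blocks), early blocks see global context while later blocks are forced to refine locally; the right $K$ values are, as the response notes, dataset-dependent and chosen empirically.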
Summary: In this paper, a novel hierarchical residual refining framework called HINT (Hierarchical Residual Refining for Scattered Point Interpolation) was proposed to improve interpolation accuracy. The framework utilized residual information from observed points to guide the prediction of target points in a coarse-to-fine manner, employing correlation modeling. HINT consisted of a series of neural network interpolation blocks, including a main block and multiple residual blocks, which estimated different components of the target function at various scales. Local constraints were applied to the interpolation blocks using K-nearest neighbor graph constraints, conforming to hierarchical residual scales. The hierarchical structure of the main and residual components was leveraged, enabling progressively stronger local constraints to enhance interpolation robustness. Experimental evaluations on diverse theoretical and application scenarios demonstrated the superiority of HINT over existing state-of-the-art interpolation methods in terms of accuracy across various tasks. Strengths: The paper is well-written and it provides concise descriptions of the proposed framework. The experiments conducted in this paper are comprehensive. Weaknesses: A couple of assumptions are used, for example in Sec. 2.3, but the authors failed to demonstrate that they all hold in experiments. The proposed framework is simply a combination of existing modules. Technical Quality: 3 good Clarity: 3 good Questions for Authors: More literature review should be provided to demonstrate the necessity of using Transformers for interpolation. What are the main differences of the proposed method when compared with the Transformer encoder-only method in [4]? The MSE results in Fig. 5 in the Supplementary Material could be separated into a table for better clarity, and the authors could provide a case-by-case analysis of the figures in the Supplementary Material. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Response to Reviewer G7AZ We appreciate the reviewer's constructive feedback. In the following sections, we address each point raised. **Comment 1:** The assumptions made in Section 2.3 are not validated experimentally. **Response 1:** In relation to the assumptions highlighted in Section 2.3, we'd like to offer the following clarifications: 1. **Foundational Work and Validation**: - Our approach is built on the NIERT interpolator [1]. The effectiveness of this methodology is comprehensively discussed in [1], establishing it as a SOTA model for scattered data interpolation. 2. **Our Innovative Design and Verification**: - **Incorporation of masked observed points**. It has been empirically validated in our ablation study (refer to Table 5, Page 8, third row). Our findings indicated a notable decline in precision when leveraging observed points without the inclusion of "masked observed" points. - **Local constraints on correlation modeling**. This design has been extensively corroborated in Section 4. For instance, Fig. 4 (on Page 8) elucidates how the interpolation blocks in HINT sequentially yield the main component of the target function, succeeded by residuals of diminishing magnitude. Furthermore, Table 5 (on Page 8) reinforces this through ablation studies, indicating that the performance of HINT, when devoid of local constraints or with non-hierarchical local constraints, is inferior to our proposed HINT. We hope that the above elucidations address your concerns. Should there be any oversight or if further clarifications are required, we humbly solicit your guidance. **Comment 2:** The proposed framework is simply a combination of existing modules. **Response 2:** We would like to take this opportunity to further elucidate the unique contributions and the value of our research. 1.
**Beyond a Mere Combination**: While we have indeed leveraged the N-BEATS architecture from time series forecasting and NIERT, our work is not just about juxtaposing two techniques. We have undertaken substantial modifications. Specifically: 1) We significantly altered the NIERT block to enable it to output re-predictions of observed points that are consistent with target point predictions. 2) Furthermore, we incorporated local constraints based on a KNN graph, which facilitates adaptation to the varying local scales inherent in function interpolation. 3) For the interpolation blocks, we incorporated progressively stringent local constraints. Early blocks focus on capturing the primary components of the function, while subsequent blocks hone in on the intricate details. This hierarchical design allows for nuanced refinement of the interpolation. 2. **Pioneering Application of a Dual Residual Architecture for Scattered Data Interpolation**: Our study is the first to introduce such a residual architecture into the realm of scattered data interpolation. 3. **Superior Experimental Results**: Our method has demonstrated exemplary performance in experiments, achieving state-of-the-art results. **Comment 3:** More literature reviews should be provided to demonstrate the necessity of using Transformer in interpolation. **Response 3:** Indeed, the existing literature specifically addressing the application of Transformers to scattered data interpolation is somewhat limited. This is the very reason our manuscript heavily references works like NIERT [1] and TFR-Transformer [2]. We can elucidate the superiority of Transformers from the following perspectives: 1) **Inductive Bias**: The Transformer architecture is particularly suited for handling scattered point data due to its inherent permutation invariance.
2) **Recent Advancements**: NIERT stands out as one of the most recent and high-performing scattered data interpolation models, effectively showcasing the superiority of Transformer-based models for such tasks. 3) **Theoretical Underpinnings**: Both references [1] and [3] draw connections between the Transformer framework and traditional interpolation algorithms. Specifically, [1] posits that the self-attention in Transformers can be viewed as a neural representation for interpolation basis function learning. Such perspectives provide a theoretical foundation, underscoring the strong alignment between Transformers and interpolation. We will endeavor to expand upon and clarify these points in the revised manuscript. **Comment 4:** What are the main differences of the proposed method compared with the Transformer encoder-only method in [4]? **Response 4:** NIERT and our proposed HINT exhibit significant differences, which can be summarized as follows: 1. **Architectural Differences**: The Transformer encoder-only method, termed NIERT, employs a singular Transformer encoder as its main model. In contrast, our proposed methodology, named HINT, integrates a series of lightweight blocks, rooted in the Transformer encoder architecture and connected through residual links. 2. **Methodological Variations**: NIERT directly predicts target point values with a single monolithic model. Our HINT adopts a hierarchical refining process. It first predicts the main component of the function, followed incrementally by a series of residual components. **Comment 5:** The MSE results in Fig.5 in Supplementary Material can be separated into a table for better clarity... **Response 5:** For clarity, we will adjust the presentation in accordance with the reviewer's suggestions. --- [1] Shizhe Ding and Dongbo Bu. NIERT: Accurate Numerical Interpolation through Unifying Scattered Data Representations using Transformer Encoder. arXiv preprint arXiv:2209.09078, 2023.
[2] Xiaoqian Chen, Zhiqiang Gong, Xiaoyu Zhao, Weien Zhou, and Wen Yao. A Machine Learning Modelling Benchmark for Temperature Field Reconstruction of Heat-Source Systems. arXiv preprint arXiv:2108.08298, 2021. [3] Cao, Shuhao, Peng Xu, and David A. Clifton. How to understand masked autoencoders. arXiv preprint arXiv:2202.03670, 2022. --- Rebuttal Comment 1.1: Comment: Based on the rebuttal made in response to my comments, the authors have adequately addressed the majority of the concerns raised. --- Reply to Comment 1.1.1: Title: Response to Reviewer G7AZ Comment: We sincerely thank the reviewer for the efforts and feedback. We are heartened to hear that we have adequately addressed the majority of the concerns raised.
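The KNN-graph local constraints discussed in Response 2 above are commonly realised as a boolean mask over dense attention. The following is a minimal illustrative sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

# Illustrative sketch of a KNN-graph local constraint realised as an
# attention mask: each query point may attend only to its K nearest keys.
# This mirrors the dense-attention-plus-mask scheme described in the
# rebuttal, but is not the authors' code.
def knn_attention_mask(queries, keys, k):
    """mask[i, j] is True iff key j is among the k nearest keys to query i."""
    d = np.linalg.norm(queries[:, None, :] - keys[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]       # indices of the k nearest keys
    mask = np.zeros(d.shape, dtype=bool)
    np.put_along_axis(mask, nearest, True, axis=1)
    return mask

queries = np.array([[0.0, 0.0], [1.0, 1.0]])
keys = np.array([[0.1, 0.0], [0.9, 1.0], [5.0, 5.0]])
mask = knn_attention_mask(queries, keys, k=1)
# With k=1, each query attends only to its single nearest key.
```

Progressively tightening the constraint across blocks then amounts to shrinking `k` for later blocks, matching the coarse-to-fine scheme described above.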
Rebuttal 1: Rebuttal: ### Supplementary Result to Reviewer WnAW We evaluated the impact of varying observed point counts on interpolation accuracy using the Mathit-2D dataset, as shown in the supplementary figure. On Mathit-2D, both our HINT method and other techniques exhibit a marked reduction in interpolation error (MSE) as the number of observed points increases. Notably, HINT outperforms other methods across all observed point settings, underscoring its superior performance. Pdf: /pdf/26374507a9c75afd9c076750d0ab286d925ceef8.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces the Hierarchical INTerpolation Network (HINT) as an accurate interpolation algorithm for various theoretical and engineering applications. HINT consists of lightweight interpolation blocks arranged sequentially, where the first block estimates the main component of the target function, and subsequent blocks predict residual components using observed point residuals from preceding blocks. Moreover, the authors introduce hierarchical local constraints. Extensive experiments demonstrate that HINT significantly outperforms existing interpolation algorithms. Strengths: 1. The method of utilizing the first and subsequent blocks to estimate the main component and residual components is novel. It applies the residual method to the interpolation algorithm, which is also interesting. 2. The whole method is easy to follow. 3. This paper conducts extensive experiments on various datasets, demonstrating its superiority. Weaknesses: 1. The comparison of the time cost is missing. It's necessary to compare the proposed HINT and other methods, including the traditional ones and deep learning ones. 2. The paper lacks the ablation study of hyper-parameter L and loss scale $\lambda$. Besides, does the value of $\lambda$ keep the same for all the datasets? 3. For Mathit-2D and Perlin, the choice of L and K is (2,6) and (4,2), respectively. Could the authors provide some guidelines for the choice? Otherwise it may hinder the spread and use of this method. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: see above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Response to Reviewer 746Q We sincerely thank Reviewer 746Q for the insightful comments and suggestions. We will address each comment in the following. **Comment 1:** The absence of time cost comparison between HINT and other methods and comparison with traditional methods. **Response 1:** In response to the comment regarding the comparison of time cost, we have performed additional evaluations. Specifically, we assessed the average interpolation time cost of our proposed HINT method and existing interpolation techniques, both traditional and deep-learning-based, on the Mathit-2D test set. Following the work from [1], we selected RBF [2] and MIR [3] as representatives of traditional algorithms. The results are tabulated below:

|Interpolation approach|Average time (ms)|Interpolation accuracy (MSE $\times10^{-4}$)|
|:---:|:---:|:---:|
|RBF|0.66|34.706|
|MIR|80.30|27.460|
|CNP|13.53|24.868|
|ANP|27.55|14.001|
|BANP|29.19|8.419|
|TFR-Transformer|26.16|5.857|
|NIERT|31.99|3.167|
|**HINT**|36.20|**2.722**|

As evident from the table, the traditional interpolation method RBF offers the highest interpolation efficiency at 0.66ms, but its accuracy is substantially compromised. While MIR offers slightly better accuracy, it has the longest interpolation time among all methods evaluated (80.30ms). For deep neural network-based methods, there is a clear trend of increasing accuracy from CNP to our HINT method, and the time cost also gradually rises. Specifically, our HINT method has an average time of 36.20ms, which, while marginally higher than that of NIERT (31.99ms), is considerably more efficient than the traditional MIR (80.30ms). Moreover, it's worth noting that our current HINT implementation employs dense attention computation combined with an attention mask to realize the hierarchical local constraints proposed in our paper.
Given that these local constraints are theoretically sparse, there remains room for optimizing the time cost of HINT. Our primary focus in this research was to achieve high interpolation accuracy, and, as of now, HINT appears to be best suited for scenarios demanding high accuracy while being somewhat lenient on time constraints. **Comment 2:** The paper lacks the ablation study of hyper-parameter $L$ and loss scale $\lambda$. **Response 2:** In fact, we conducted ablation studies on the hyper-parameter $L$ and loss scale $\lambda$. The ablation study concerning the block number $L$ can be found in Table 6 on Page 8. The ablation experiment for the auxiliary loss weight $\lambda$ is presented in the first row of Table 5 on Page 8, specifically under the section "HINT w/o $\mathcal{L}_{\mathrm{aux}}$", which implies $\lambda=0$. To clarify further on your query regarding the value of $\lambda$, it does vary, and these specific values are detailed in the last row of Table 2 on Page 3 in the supplementary material. We arrived at these values after a series of experiments to fine-tune our approach. We appreciate your question, as it has made us realize that we might not have explicitly highlighted these details in our current presentation. We will emphasize these more prominently in the revised version of our manuscript to prevent any ambiguity. **Comment 3:** The guidelines for choosing hyperparameters (block number $L$, attention layer number, and local parameter $K$). **Response 3:** If we understand correctly, the second hyperparameter you mentioned is the number of attention layers in the main block. In fact, the choice of these hyperparameters is not arbitrary. We outline the rationale for setting these parameters as follows: 1. **Block number ($L$)**: In our experiments, we observed that for functions with smoother profiles and fewer details within a set, a smaller block number $L$ is appropriate.
Conversely, for functions rich in details and containing higher frequency components, a larger block number $L$ is more suitable. 2. **Attention layers in the main block**: Concerning the number of attention layers in the main block, our findings indicate that larger datasets demand more attention layers to attain sufficient model capacity. On the other hand, for smaller datasets, fewer attention layers can help prevent overfitting. 3. **Local constraint parameter ($K$)**: Setting $K$ is somewhat challenging, requiring insights into the data or even expert knowledge. Such knowledge encompasses spatial auto-correlation of the function. If the function exhibits auto-correlation over a larger neighborhood, a larger $K$ might be more appropriate. In our experiments, we found that determining the optimal $K$ often requires trial and fine-tuning. For instance, for the Mathit dataset—a synthetic set comprised of smoother mathematical functions with a significant volume—a smaller block number $L$ and more attention layers seem optimal. The mathematical functions' features correlate over longer distances (periodicity, symmetry, trends, etc.), thus initially a larger $K$ covering all observation points is prudent. Conversely, for the Perlin dataset—a real-world 2D velocity field dataset of smaller volume but intricate, detailed functions—a larger block number $L$, fewer attention layers, and a smaller $K$ are logical. We appreciate the reviewer's constructive feedback and will incorporate this discussion in the Supplementary Material. ------ [1] Shizhe Ding and Dongbo Bu. NIERT: Accurate Numerical Interpolation through Unifying Scattered Data Representations using Transformer Encoder. arXiv preprint arXiv:2209.09078, 2023. [2] M. J. Powell, Radial basis functions for multivariable interpolation: a review. Algorithms for Approximation, 1987. [3] Q. Wang, P. Moin, and G. 
Iaccarino, A high order multivariate approximation scheme for scattered data sets, Journal of Computational Physics, vol. 229, no. 18, pp. 6343–6361, 2010. --- Rebuttal Comment 1.1: Title: Response to Rebuttal. Comment: We thank the author for rebuttals. The comparison of the time cost seems fine. My main concerns are addressed. --- Reply to Comment 1.1.1: Title: Response to Reviewer 746Q Comment: We are grateful for the reviewer's understanding and acknowledgment of our rebuttals. It's reassuring to know that we have successfully addressed your main concerns. We thank the reviewer for the constructive feedback throughout this review process.
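For context on the RBF baseline row in the timing table above, a minimal Gaussian-RBF interpolant can be sketched as follows. The Gaussian kernel and the shape parameter `eps` are our illustrative choices, not the formulation of [2] or the experimental configuration:

```python
import numpy as np

# Minimal 1-D Gaussian-RBF interpolant, in the spirit of the RBF baseline
# compared in the rebuttal's timing table. Kernel choice and shape
# parameter eps are illustrative assumptions, not the experimental setup.
def rbf_fit_predict(x_obs, y_obs, x_tgt, eps=20.0):
    phi = lambda a, b: np.exp(-eps * (a[:, None] - b[None, :]) ** 2)
    w = np.linalg.solve(phi(x_obs, x_obs), y_obs)   # interpolation weights
    return phi(x_tgt, x_obs) @ w                    # weighted kernel sum

x_obs = np.linspace(0.0, 1.0, 8)
y_obs = np.sin(2 * np.pi * x_obs)
y_back = rbf_fit_predict(x_obs, y_obs, x_obs)       # exact at observed points
```

Fitting reduces to one dense linear solve, which is why RBF is the fastest method in the table while lacking the learned representations that give the neural methods their accuracy.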
Provably Efficient Offline Reinforcement Learning in Regular Decision Processes
Accept (poster)
Summary: The paper presents an offline RL algorithm to learn near-optimal policies in an (episodic) Regular Decision Process (RDP). The problem is to learn a policy given a dataset. The algorithm is split into two parts. The first is to learn the transition function of the RDP. Then, the problem of offline learning on the RDP is reduced to offline learning in an MDP (intuitively, the underlying automaton in the RDP is extracted) to generate a near-optimal policy in the RDP. The upper bound provided in this setting improves upon an upper bound of a similar algorithm in the non-episodic variant, and a lower bound is also provided. Strengths: Good problem. Well-written paper. Contributions and algorithms are well-stated and well-contextualized w.r.t. related work. Weaknesses: NA. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What may be some challenges in adopting these algorithms in practice? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and recognition of our novelty. Please find our response to your questions and concerns below. ### Response to Questions: Input prior knowledge and assumptions are, generally, the main limiting factors to the practicality of learning algorithms. Our algorithm works under a few assumptions, as listed in the paper, and only a few inputs. Besides the input dataset, we can see that ADACT-H only requires a confidence parameter $\delta$ and an accuracy parameter $\varepsilon$, both of which are user-specified. This is a minimal set of input knowledge. The second variant, ADACT-H-A, has a different trade-off between input knowledge and assumptions. Depending on the specific application, either solution might be more appropriate than the other. We currently work under the assumption that reward distributions are supported on a finite set. More general reward distributions are an important extension that would allow RDPs to generalise to new classes of environments. We leave such a modification for future work. We will include part of this discussion in the final revision of the manuscript.
Summary: The authors present a novel algorithm for offline RL in episodic Regular Decision Processes (RDPs), a computationally feasible subclass of Non-Markovian Decision Processes. The presented algorithm has two phases: 1) learning the underlying automaton by a novel algorithm, and 2) a Markov transformation of the rest of the dataset, after which any state-of-the-art algorithm for offline RL in usual MDPs can be used. To obtain theoretical guarantees, the authors present a notion of concentrability coefficient in the RDP setup. Finally, lower bounds on the sample complexity are presented. Strengths: - The first algorithm for offline RL in the challenging setup of Regular Decision Processes with provable guarantees. - A novel algorithm for automata learning that has a much tighter sample complexity than in the previous work. - The lower bounds show that the introduced concentrability coefficient indeed makes sense from the point of view of learning in RDPs. Weaknesses: - Lack of empirical validation. Empirical experiments, even on toy examples, would much increase the value of the presented paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What are the limitations to generalizing the presented techniques beyond the tabular setting? - Is it possible to propose a direct algorithm that does not depend on a reduction to usual MDPs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This is a theoretical paper that does not need to address the potential societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and constructive suggestions. Please find our response to your questions and concerns below: ### Response to Questions: - We believe extension to discrete structured RDPs (with a known structure) could pose some mild challenges, in terms of algorithm design and analysis thereof, and may require some assumptions to be imposed. However, the task is certainly far from straightforward, and the said challenges depend on the type of structure. In contrast, in the case of RDPs with infinite state and observation spaces, one must resort to using substantially different ideas, and we believe the problem becomes an interesting open question for future research. In the latter case, the first and most natural generalisation is considering reward distributions with continuous support. To do so, depending on the reward distribution class that is assumed, the "TestDistinct" function would need to be updated accordingly, as well as its analysis. The statistical test would be specific to continuous distributions and not be based on counting schemes. Although a similar modification might be done for observations, these pose an additional challenge with respect to rewards. In fact, observations are both outputs and inputs for our model. An infinite observation space would immediately imply an infinite number of input symbols and infinite potential transitions from each RDP state. This is a much more meaningful change that possibly alters the class and the expressiveness of the underlying transducer. This case would require an independent study to understand its impact. - We believe deriving a direct algorithm is a very interesting future direction that is very challenging and calls for novel algorithmic ideas. Intuitively, such a direct algorithm avoids a sharp two-phase separation and would allow for interleaving automata learning steps with value estimation steps.
This is desirable in order to direct RDP learning steps toward regions with high value estimates, thus resulting in a more sample-efficient algorithm for RDPs. However, deriving theoretical sample complexity bounds for such an algorithm could be very challenging. On the other hand, the approach taken appears natural in view of the fact that each RDP has an associated MDP, which is very easy to construct or simulate. Our current understanding is that the reduction is not a source of complexity in itself; it was powerful enough to yield a sample complexity bound that matches some factors in the lower bound. Overall, we believe this is a very promising future direction, despite the challenges involved. We will thus include this discussion in the conclusion. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their answers and additional discussions. The comments have addressed my questions and I decide to keep my score.
Summary: The paper presents RegORL, an algorithm for offline RL in episodic RDPs. The algorithm combines automata learning techniques with state-of-the-art offline RL algorithms for MDPs. The authors provide a non-asymptotic high-probability sample complexity bound for RegORL, which guarantees the learning of an $\epsilon$-optimal policy. They also establish a sample complexity lower bound for offline RL in RDPs. Strengths: 1. The paper comprehensively and rigorously explores algorithms for offline RDPs while leveraging automata learning techniques. This contribution carries great significance within the realm of offline RL theory. 2. The structure of this paper is well-organized, resulting in a smooth and coherent reading experience. The comprehensive summary of relevant literature further enhances its overall completeness. Weaknesses: 1. It would be beneficial to present a comparative analysis of RDP algorithms in tabular form, facilitating a clearer understanding of the different studies in this area. 2. In what ways does the offline setting present additional difficulties beyond the combination of commonly used techniques from offline RL and the technique used in online RDP research [1]? [1] Alessandro Ronca and Giuseppe De Giacomo. Efficient PAC reinforcement learning in regular decision processes. In IJCAI, pages 2026–2032, 2021. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Does the claim made by the authors in line 98, stating that computing a near-optimal policy is challenging even for a known POMDP, imply that this difficulty does not exist in the case of RDPs? 2. Can the techniques proposed in this paper be applied to settings where the state or observation space is infinite, such as RDPs with a linear structure? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The suggestions have been claimed in "Weaknesses" and "Questions". Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and constructive suggestions. Please find our response to your questions and concerns below: ### Response to Weaknesses: 1. As suggested by the reviewer, we will include a more explicit comparison with the other papers that address RL in RDPs. We had identified three works [25, 27, 28]. These have been discussed more extensively in the appendix. However, we will make the comparison more explicit and also include the important differences in the main body in the final version. 2. Offline RL in RDPs has some specific features with respect to the two techniques taken in isolation. One important difference is the need for a new single-policy RDP concentrability coefficient, which we defined. Its main difference with respect to the single-policy MDP concentrability is that each occupancy measure is computed on features that are not present in the dataset. Also, with regard to [1] (Ronca and De Giacomo, 2021), we mention that it addresses online RL in RDPs in a discounted, infinite-horizon setting. Importantly, this algorithm uses the uniform policy for learning. Hence, the machinery in [1] could be adapted to our setting only under the very strong assumption that the behaviour policy is uniform. Even in this case, such an adaptation would imply sample efficiency bounds that are much looser than the ones we show here. In particular, the bound would roughly scale according to $O^{15}$ and $\epsilon^{-10}$, which is far larger than our sample complexity bounds. We will clarify this in the final version. ### Response to Questions: 1. In line 98, we mention the computational complexity of finding the optimal policy of a known POMDP. This is PSPACE-complete. For RDPs, instead, thanks to their internal structure, optimal policies can be computed very efficiently, in polynomial time. When the RDP is known, the solution is to compute the MDP of Definition 3 and find its optimal policy. 2.
The answer differs depending on whether we consider infinite states, observations, or rewards. Regarding states, the finite number of hidden states is tightly related to the finite automaton representation on which the RDP is built. This is part of what distinguishes them from more general formalisms such as POMDPs and yields the computational advantages discussed for Question 1. Such a modification would strongly alter the specificity of RDPs. An infinite number of observations, on the other hand, leads to an interesting generalisation, which would allow RDPs to capture new interesting environments. The main challenge is that observations are part of the input alphabet of the transducer. In fact, an infinite number of input symbols would immediately impact the number of potential transitions from each state. This additional power modifies the class of automata under study, which would require an independent study. These are so-called symbolic automata. See, e.g., “The Power of Symbolic Automata and Transducers” by D'Antoni and Veanes, in CAV 2017. Extending RDPs to symbolic automata is an interesting direction for future work. On the other hand, we believe that an infinite set of rewards is a very natural extension that is worth considering. Continuous reward functions do not conflict with the finite transducer structure or its transition function. Mainly, the "TestDistinct" function would need to be updated accordingly, as well as its analysis. This is an interesting direction for future work. We include and discuss both as future directions in the revised version.
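The reduction to an MDP mentioned in the response above (compute the MDP associated with the learned transducer, then run any offline-MDP algorithm) can be sketched as a dataset relabelling step. The dictionary encoding of the transition function below is our own assumption, not the paper's notation:

```python
# Illustrative sketch of the Markov transformation step: a learned RDP
# transition function tau maps (state, (action, observation)) to the next
# automaton state; relabelling each trajectory with these states yields
# Markovian (state, action, reward, next_state) tuples that an
# off-the-shelf offline-MDP algorithm can consume. The dict encoding is
# our assumption for illustration.
def to_mdp_dataset(trajectories, tau, q0):
    """Each trajectory is a list of (action, observation, reward) triples."""
    mdp_data = []
    for traj in trajectories:
        q = q0
        for a, o, r in traj:
            q_next = tau[(q, (a, o))]
            mdp_data.append((q, a, r, q_next))   # Markovian transition tuple
            q = q_next
    return mdp_data

# Two-state toy automaton: observation 'x' flips the state, 'y' keeps it.
tau = {(0, ('a', 'x')): 1, (1, ('a', 'x')): 0,
       (0, ('a', 'y')): 0, (1, ('a', 'y')): 1}
data = to_mdp_dataset([[('a', 'x', 1.0), ('a', 'y', 0.0)]], tau, q0=0)
```

Once relabelled, the automaton states play the role of MDP states, which is why correctness only requires that the learned transition function respects the Markov property on those states.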
Summary: This work studies the problem of offline reinforcement learning (ORL) of episodic regular decision processes (ERDP), where we wish to find a near-optimal policy for an unknown ERDP with a small dataset of trajectories (collected with a behavioral policy). The authors took a reductive approach: first find the states and transitions of the minimal automaton (by a proposed algorithm, ADACT-H) underlying the ERDP, then transform the original dataset into an MDP (using the estimated automaton states as MDP states), and lastly apply any existing ORL algorithm to the resulting MDP. The theoretical results are substantiated with theorems, necessary definitions, and assumptions. Strengths: 1. The exposition is easy to follow and the overall logic is coherent and sound. 1. The presented results support claims of contributions and seem novel. 1. The problem studied is interesting and I think that the reductive approach in this work, involving recent results from automata theory (ADACT, which learns a minimal Moore machine), may inspire other RL studies. Weaknesses: 1. Perhaps the result of a conscious choice of tradeoff in the presentation, it seldom highlights the key theoretical challenges and the authors' insights in the main text of the paper. Maybe the authors could reclaim some space for this purpose from example 1? (It is not referenced in the main text.) 1. In a similar vein, I wish that there were more technical discussion on (promising) un/under-explored alternative approaches and implications for related problem settings in closing. This should help contextualize the contributions of the work for a broader audience. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The trajectory length is fixed ($H$) in the episodic setting, which implies a fixed-length language (unlike general automata, which can accept variable-length sentences).
Furthermore, the automata state space is stratified over timestep--making the DFAs acyclic--so I wonder whether the full generalization of DFAs is a good fit for the episodic setting. 1. Related to the above, could you comment on the relevance of your approach to the non-episodic setting? The DFA modeling of non-episodic environments seems more natural. 1. Is there some interesting interaction between the learned DFA and the subsequent ORL? For example, does it matter if the learned DFA is slightly larger than minimal? Is there some kind of tradeoff worth sharing or further study? What are some sources of looseness in the reduction approach you took? 1. Assumption 1, which asks for all observations to be present in the dataset, seems very restrictive (perhaps impossible for many ERDPs). Since the ORL step is over $\mathcal{Q}\times\mathcal{A}$, is it possible to relax the assumption? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading, constructive suggestions, and recognition of our novelty. Please find our response to your questions and concerns below: ### Response to Weaknesses: As suggested, we will give further context and background about the main challenges in the introduction and closing. The discussion will include the following observations: One of the main challenges of working with non-Markovian decision processes is to develop learning algorithms that avoid exponential dependencies in the horizon in both sample- and computational-complexity. In our algorithm, we were able to achieve this result by incrementally learning each state of the RDP (all the states in ADACT-H or a subset of states in ADACT-H-A) and carefully accounting for the contribution of learning each state to the sample complexity. This role is mainly played by the distinguishability factor, which modulates the complexity of identifying each state for a specific RDP instance. In many instances, this is what allows our sample complexity analysis to remain polynomial in the relevant factors. Finally, our algorithm has low computational complexity by design. As future work, we believe that valuable directions can concentrate on alternative parameterizations or improvements on the definition of distinguishability. In particular, the development of a state-merging test that accounts for the (unknown) RDP structure of the distributions being tested. ### Response to Questions: 1. The automata that define Episodic RDPs are acyclic. More precisely, the class we consider is the one composed of all and only the RDP automata that generate a stop symbol after $H$ transitions. As a consequence, any automaton in this class is acyclic. This means that our work considers the most general class of automata that is consistent with a fixed horizon. 
This RDP definition also remains close to the ones that are present in the literature, because a non-Episodic RDP would simply be an automaton that is not forced to output $o_\bot$ after $H$ transitions. The explicit layered structure mainly serves to allow for some optimizations when testing RDP states. In fact, our automata learning algorithms ADACT-H and ADACT-H-A are specialised for the episodic setting, so as to have better performance than generic algorithms, which must account for the presence of cycles. 2. Our work is the first one addressing offline RL in RDPs. Fixed-horizon MDPs are very well-studied and popular in the theoretical RL literature (see, for example, [12, 18]), in both online and offline settings, and they appear to have gained relatively more attention than their infinite-horizon counterparts, perhaps mostly because many practical tasks of interest are episodic in nature. Our paper contributes to this line of research, extending it to RDPs. In our work, we exploit the finiteness of the horizon in two ways: both to allow some algorithmic optimizations, as discussed above, and to take advantage of the horizon in our analysis. We do agree, however, that other interactions and optimality criteria are also interesting, such as the discounted, infinite-horizon setting. This would require some independent treatment, which we also identified as a possible future direction of our work. 3. For computing an $\epsilon$-optimal policy, it is sufficient that the Offline RL algorithm receives a dataset that respects the Markov assumptions in the new states and the original rewards. To guarantee this, it is sufficient that the function "TestDistinct" never returns a false negative. Minimality of the automaton, instead, is guaranteed by the absence of false positives. This second condition, however, is of far less importance than the former, as it only causes an increase in the sample complexity of the offline RL part, without any impact on correctness. 
4. Assumption 1 is only needed for the variant that learns the complete model (i.e., the one of Theorem 6), not for the approximate version of the algorithm, the one appearing in Theorem 8. Furthermore, Assumption 1 effectively means that the behaviour policy must select every action with positive probability, but we can modify the statement to omit the probability of observations. This is consistent with equation (2), with the convention that 0/0 = 0. We will improve the assumption and the surrounding text to clarify both points.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies the sample complexity of offline reinforcement learning (RL) in environments with non-Markovian observations, i.e., partially observable Markov decision processes (POMDPs). Specifically, the paper considers the episodic regular decision process (RDP) where the spaces of state, action, and observation are all finite. Given a batch of offline data, the proposed (offline RL) algorithm learns the ''states'' from the history data and converts the problem into offline RL for MDPs. The paper provides a sample complexity bound for learning the states, as well as the optimal policy under certain assumptions. The paper also provides a lower bound to prove the optimality of the upper bounds. Overall, the reviewer feels that the proposed algorithm for POMDPs brings new insight to the RL community and the theoretical results are strong enough to be accepted, even though they are built on strong assumptions, for example the distinguishability condition. As claimed by the authors, this is the first RL complexity bound for this setting and approach. Please see more details in the strengths and weaknesses parts. Strengths: The episodic regular decision process is conceptually a very large set of models that can be used to model many realistic environments. The 'state' the proposed algorithm is trying to learn is essentially a sufficient statistic of the history for inference of future observations (following the behavior policy), which is very similar to the predictive state representation (PSR) but slightly different in key assumptions (and leading to a different learning approach). If my understanding is correct, every episodic finite-state POMDP is also an episodic RDP since we can always define the concatenation of the entire history or the belief, i.e., conditional distribution, of the 'hidden state' as the 'state' of the RDP. Weaknesses: 1. 
The methodology of first learning the state and then applying an RL algorithm to the transformed data is quite a straightforward approach. And it seems that Assumption 2 is the critical reason that we can eventually obtain the polynomial sample upper bound $\sqrt{H}\log(QAO)/(d\mu)$ following such a straightforward approach. My biggest concern is the scale of $\mu_0$, which is related to the distinguishability. In particular, $\mu_0$ represents the $L_\infty$ distance between distributions over $e_{t:H}$, which will be $(OA)^{H-t}$-dimensional vectors with bounded $L_1$ norm. Roughly speaking, it feels that the average $L_\infty$ distance will be $(OA)^{-(H-t)}$. Then, unless the maximal distance $\mu_0$ has a jump in the orders due to some special structure of the RDP, we seem to have exponential dependency on $H$ in the sample complexity. Perhaps more discussion of the scale of $\mu_0$ with examples would be necessary. 2. Assumption 1 is also a very strong assumption and greatly reduces the difficulty of learning the states and optimal policy. I am wondering if we can still obtain the current strong results by adding pessimism (e.g., as in [18]) into the current algorithm and removing Assumption 1. 3. More technical discussion about the (reasons for) improvement compared with the previous results [10, 27] in the main paper would be helpful for readers to better understand the contribution of the current paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and constructive suggestions. Please find our response to your questions and concerns below: 1. In the worst case scenario, the reviewer's intuition is correct. The distinguishability parameter in $L_\infty$ distance may be exponentially small in the remaining horizon. This is indeed the case in the hard instance used for proving the lower bound. In the RDP of Figure 3 of the appendix, the two paths $q_{01} \dots q_{0L}$ and $q_{11} \dots q_{1L}$ are indistinguishable for $L-1$ steps, when they generate strings with uniform probabilities, and differ only after $L$ steps. Each string of length $L$ has a probability of $2^{-L}$ of being generated. However, this does not imply that distinguishability is necessarily small. In fact, two factors contribute to a larger parameter $\mu_0$, as desired: - First, we recall that we compute distinguishability with respect to the *prefix* $L_\infty$ distance, which we denote by $L_\infty^\mathsf{p}$. The prefix distance computes the maximum distance between distributions over strings of all lengths. This means that if two states can be separated just by looking at suffixes of length $K$, where $K$ could be as small as 1, for effects that are immediately observable, the distinguishability parameter for those two states will scale as $2^{-K}$, in the worst case, not $2^{-H}$. - A second factor that allows $\mu_0$ to be large is the existence of one or few witnesses for distinguishability: these are traces of any length in $[1,H]$ that have high probability of being generated from one state but not from the other. Looking at the extreme side of the spectrum, we can consider RDPs with deterministic outputs, which are still non-Markovian environments. In this case, the prefix $L_\infty$-distinguishability coincides with $\max_{u \in [t]} \sum_{e_{0:u}} \left| p_1(e_{0:u} *) - p_2(e_{0:u} *) \right|$. More generally, all the intermediate cases are possible. 2. 
Assumption 1 is only needed if the full algorithm is combined with ADACT-H, which learns the complete RDP. The approximate version, that is ADACT-H-A, the algorithm referred to in Theorem 8, does not require Assumption 1. We will clarify this fact in the camera-ready version. However, integrating the two phases into a single learning algorithm is one of the main future directions we had also identified, which we will include in the final version. This will allow us to propagate low-value estimates from the RL side into the automata learning algorithm. 3. We will clarify the differences with respect to the mentioned references in the final version. To briefly highlight the main observations here, we can comment for [10] and [27] separately. - The paper [27] addresses online RL of RDPs in a discounted, infinite-horizon setting. Importantly, this algorithm uses the uniform policy for learning. So, the algorithm might be adapted to our setting, only under the assumption that the behaviour policy is uniform. Even in this case, such adaptation would imply sample efficiency bounds that are much looser than the ones we show here. In particular, the bound would roughly scale according to $O^{15}$ and $\epsilon^{-10}$. - The paper [10] is an important reference for the automata learning algorithm that we adapted to the episodic setting. Specifically for this part, we compare the upper bound at the bottom of page 7, and our upper bound in equation (4). Improving over known bounds for automata learning is necessary in order to obtain tight bounds for RL so as to identify the true sources of complexity and hence obtain a better understanding of the problem overall. --- Rebuttal Comment 1.1: Comment: Thanks for authors' clarification to my concerns and further explanations. I will keep my score as weak accept.
Convex and Non-convex Optimization Under Generalized Smoothness
Accept (spotlight)
Summary: This submission claims to relax the Lipschitz assumption on the gradient of the objective function in nonconvex optimisation, and obtains convergence bounds similar to textbook convergence results. Strengths: The presentation of the material is clear and the narrative flows well. Weaknesses: The result presented in this manuscript is not of importance. When we take the gradient information about the initialisation point into consideration, we can correspondingly restrict the feasibility set to the neighbourhood around the initialisation point, and then the $(r,l)$-smoothness defined in this paper can be replaced by the classical Lipschitz condition. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: In equation (5), why is there a $-1$ in the definition of $S_{\textrm{rect}}$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: As there is no novel discovery in this submission, the limitation discussion is not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for the comments. However, we do NOT think there is any simple way to restrict the feasibility set to a neighborhood of the initialization point based on gradient information around the initialization, as claimed by the reviewer. In particular, we want to clarify the following points regarding the challenges of restricting the feasibility set to some neighborhood: - First, stationary points may be very far away from the initialization. For example, consider the function $f=\exp(-x)$ with domain $\mathbb{R}$, whose stationary point is at infinity. If you restrict the feasible set within a neighborhood of the initialization, you will only converge to a sub-optimal point, which makes no sense. - For another example, consider the function $f(x)=1/x+1/(1-x)$ with domain $(0,1)$. Suppose the initialization is close to $0$; it is hard to make sure the neighborhood contains the optimal point at $1/2$ and does not go outside of the domain, given that the initialization point is closer to the boundary than the optimal point. Although it might be possible to find such a neighborhood for this simple convex and one-dimensional example (e.g. by cheating), we believe it is hard for a non-convex and high-dimensional function. - We also want to point out that the boundary of the domain could be non-convex and not known to the algorithm, which makes things even harder. So you may end up getting a non-convex neighborhood in order to contain stationary points, which does not really reduce the problem to the classical smoothness condition, since classical analysis requires a convex feasible set. Based on the discussions above, we respectfully disagree with the reviewer's suggestion that our results are not novel or important. 
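As a quick numerical illustration of the second example above (the step size and iteration count here are illustrative choices, not values from the paper): constant-stepsize gradient descent on $f(x)=1/x+1/(1-x)$, started near the boundary at $x_0 = 0.1$, does reach the minimizer $1/2$, even though a neighborhood of $x_0$ small enough to bound the Hessian near the boundary would not contain it.

```python
# Gradient descent on f(x) = 1/x + 1/(1-x) over the open domain (0, 1).
# The minimizer is x* = 1/2; gradients are huge near the boundary, so a
# small constant step size is used (illustrative choice).

def grad(x):
    # f'(x) = -1/x^2 + 1/(1-x)^2
    return -1.0 / x**2 + 1.0 / (1.0 - x)**2

x = 0.1            # initialization close to the boundary of the domain
eta = 5e-4         # small constant step size
for _ in range(20000):
    x -= eta * grad(x)

print(abs(x - 0.5))  # distance to the minimizer 1/2, essentially zero
```

The iterates move away from the boundary (where the gradient is large) toward $1/2$, staying inside $(0,1)$ throughout.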
Regarding the question on equation (5), the $-1$ is there because we do not include the two end points $\tau$ and $\tau_{1/2}$ when defining $S_{\text{rect}}$, in other words, $S_{\text{rect}}:=\sum_{\tau_{1/2}<t<\tau} (G/2)^2=(G/2)^2(\tau-\tau_{1/2}-1)$
Summary: Relaxed smoothness conditions have been introduced to study the gradient clipping algorithm, and to show that clipping in particular allows fixed step-size convergence without smoothness under this relaxed assumption. This paper further generalizes the relaxed smoothness notion used for clipping by bounding the Hessian by any non-decreasing function of the gradient norm. Then, fixed step-size convergence is shown under the relaxed smoothness assumption in a variety of settings. The key technical part is to show that, assuming a large enough initial bound on the gradients and relaxed smoothness, the gradients remain bounded throughout the trajectory regardless of the setting (convex, strongly convex, non-convex, and even stochastic with some necessary caveats). Then, boundedness of the gradients along the trajectory implies the regular smoothness condition, and thus standard analyses can be used to show convergence of gradient descent. Strengths: - Shows convergence of GD, NAG and SGD under relaxed smoothness conditions, which was not known without gradient clipping. This allows one to prove convergence of GD for general classes of functions. - Shows boundedness of the gradients along trajectories under relaxed smoothness conditions. This is a nice-to-have technical result, especially for NAG. Weaknesses: - Fixed step-size rates seem quite pessimistic in this case, since step-sizes all along the trajectory depend on the initial bound on smoothness, which might be very large. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1) The main argument is that global bounds hold on the gradients. Although it is good to have, this potentially leads to very small step-sizes even in regions where gradients are small. Would it be possible to improve those bounds, and show that the gradients actually decrease along the trajectories (e.g., in the strongly convex setting), in order to allow for increasingly large step-sizes? 
If so, what would the decrease rate be? 2) How do results in the paper compare with simple clipped (S)GD with clipping threshold set to the gradient norm at $x_0$ (basically, enforce boundedness instead of showing it) ? Convergence rates seem equivalent but additional terms seem different in each case. Could you clarify that? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations of their work adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments! We will try to address the concerns and questions below. 1. Regarding the stepsize, we agree that using an adaptive stepsize (e.g. the gradient clipping technique, which essentially uses a larger step size when gradients are small) may accelerate convergence. However, only constant-factor acceleration is possible because the convergence rates in our paper are already optimal up to constant factors. This may matter in practice but is not important for theoretical analysis. Actually, our analysis should directly apply to most methods with an adaptive stepsize, at least in the deterministic setting. We consider using a constant stepsize for the following reasons: - There are already a lot of papers studying methods with an adaptive stepsize for $(L_0,L_1)$-smooth functions. We are the first to study the classical methods with a constant stepsize, and our results show that adaptivity is not necessary for generalized smooth functions. - In the stochastic setting, using a constant step size allows the relaxation of the noise assumption from the bounded-noise assumption in existing papers to the bounded-variance assumption in our paper. - For NAG, using an adaptive stepsize may lead to a very sophisticated algorithm, which is hard to implement. 2. We believe that simple clipped (S)GD with clipping threshold set to the gradient norm at $x_0$ should also converge at the same rate as constant-stepsize (S)GD, up to constant factors. We believe the convergence can be shown using our analysis. In the convex setting, they are actually equivalent since the gradient norm is non-increasing by Lemma 4.1. In the non-convex setting, it is hard and not very important to theoretically compare the different constant factors in their convergence speeds, since we would also have to obtain very precise complexity lower bounds for them.
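The claimed equivalence in the convex setting is easy to illustrate numerically. The sketch below (the convex function $f(x)=x^4$ and the step size are illustrative choices, not from the paper) runs plain constant-stepsize GD and clipped GD with threshold $\|\nabla f(x_0)\|$: since the gradient norm never exceeds its initial value along the trajectory, the clip never fires and the two iterate sequences coincide exactly.

```python
# Clipped GD with threshold c = |grad f(x0)| vs plain constant-stepsize GD
# on the convex function f(x) = x^4 (1D, so norms are absolute values).

def grad(x):
    return 4.0 * x**3   # gradient of f(x) = x^4

def gd(x0, eta, steps):
    xs, x = [x0], x0
    for _ in range(steps):
        x -= eta * grad(x)
        xs.append(x)
    return xs

def clipped_gd(x0, eta, steps, c):
    xs, x = [x0], x0
    for _ in range(steps):
        g = grad(x)
        if abs(g) > c:                 # clip to threshold c
            g = c * (1.0 if g > 0 else -1.0)
        x -= eta * g
        xs.append(x)
    return xs

x0, eta, steps = 1.0, 0.05, 50
plain = gd(x0, eta, steps)
clipped = clipped_gd(x0, eta, steps, c=abs(grad(x0)))
print(plain == clipped)   # True: the threshold is never exceeded
```

Since the gradient magnitude decreases monotonically along the GD iterates here, the clipping branch is dead code and both runs perform bit-identical updates.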
Summary: This paper generalizes the recently introduced $(L_0,L_1)$-smoothness, which itself extends the $L$-smoothness that is key in analyzing rates of convergence of optimization algorithms. The authors introduce the concept of $\ell$-smoothness where $\ell$ is a function of the gradient of the function to minimize (e.g.: a polynomial). The key idea is to show that under the $\ell$-smoothness assumption, the gradient remains bounded along the iterates of a given algorithm. The authors focus on smooth optimization and tackle both convex and non-convex settings, in which they (roughly) recover the bounds already known for $L$-smooth functions and show whether they hold or not with generalized smoothness depending on the function $\ell$. Strengths: The paper is well written, and the presentation is clear. The mathematical statements are rigorous and I did not spot mistakes. The paper builds on existing works on $(L_0,L_1)$-smoothness but significantly extends them and covers many important settings (convex/non-convex, etc.). Additionally, I found the reasoning interesting and the proof techniques used depart from the classical ones (especially in the non-convex setting). Overall, I think that this paper brings a significant contribution to the important question of the convergence of optimization algorithms. My general feeling is very positive. Weaknesses: On the theoretical side I think that there are some minor bugs (see questions below), but nothing really problematic. However, while the paper has extensive theoretical results, it does not provide any numerical experiments, which is important for machine learning and optimization papers. It would have been, for example, very informative to see how the theory allows making GD and NAG converge beyond the Lipschitz assumption by using the authors' step-size conditions. The related work section seems too short to me given that gradient-based optimization is a very active topic of research. 
For example the authors call many results as "well known" where instead credit could have been given. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1) Line 106 on the fact that f has to tend to infinity. First it might be good to specify +infinity (instead of infinity) to remove any ambiguity. Then I think that the statement is not true in its current form since R^d is an open set, yet there, the epigraph of f is closed without coercivity assumptions (as the authors discuss after). I think that the discussion is valid for *bounded* open domains. Could you clarify/correct this? 2) Line 157. Following my previous question, the authors motivate the use of a domain X so as to include functions like logarithms and rational functions. However it seems that these functions are not closed (eg: log(x) when x -> 0), so Assumption 2 does not hold. Could you comment on that? 3) Line 76-78. In the paper, and notably at these lines, there is a confusion between NAG which is indeed optimal for convex functions, and Heavy Ball with Friction (HBF). Indeed, NAG while being similar to HBF is not optimal in the strongly convex setting but HBF is. This should be clarified. (l76-78) 4) On Assumption 3 (existence of a minimizer). While being reasonable this makes the studied framework more restrictive than the classical one (where the optimal rate is also valid for functions whose minimizer is at infinity). Therefore the results are not valid for the whole class of convex functions (but a very large subset of it), this should be a little more emphasized while discussing the contributions so as not to mislead the reader in the introduction. 5) It would be better to define rigorously "sub-quadratic", in particular, does it include quadratic functions or is it strictly sub-quadratic? 6) Do the authors have any idea on whether their modified NAG (Alg. 1) might improve NAG even when used on L-smooth functions? (even though NAG is already theoretically optimal). 
If not, is it worse than the vanilla version or does it behave similarly? Numerical experiments would be insightful here too. 7) Typos: l76: strong convex -> strongly convex l207: line 6 of the algorithm -> Is it not rather line 4 of the algorithm? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Some limitations are discussed, in particular those related to NAG in the non-convex setting. The limitations about existence of a minimizer might be further discussed (see questions). Societal impact is not really applicable here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback! We will add some experimental results to support our theories in the revision. We will also consider adding more related works. Below we try to address your questions. 1. First, we do mean positive infinity in Line 106, and thank you for pointing it out. We do not quite understand why the argument "since $\mathbb{R}^d$ is an open set, yet there, the epigraph of f is closed without coercivity assumptions" suggests our statement is not true. We believe it is consistent with our statement. Note that $\mathbb{R}^d$ does not have boundary, and thus in this case, we do NOT require $f$ to go to infinity when $x$ goes to infinity. Please let us know if we misunderstood your question. 2. We want to clarify that, although Assumptions 1 and 2 are necessary for our convergence analysis, the definitions of $\ell$-smooth or $(r,\ell)$-smooth functions themselves are independent of these two assumptions. So the examples in this section do not need to satisfy these two assumptions. Of course, we are more interested in examples satisfying them. So the logarithmic function here refers to $-\log(x)$ and rational functions refer mostly to those satisfying these two assumptions. 3. Thank you for the suggestion! Could you elaborate on why NAG is not optimal for strongly convex functions, given that its complexity is $\sqrt{\kappa}\log(1/\epsilon)$? 4. Our analysis does directly apply to the case where the optimal point does not exist. We make this assumption just to present the convergence result in terms of the distance between the iterate and the optimal point, so that it looks similar to the most classical textbook ones. We will make it clear in the revision. 5. We mean strictly sub-quadratic functions, which means $\lim_{u\to\infty}\ell(u)/u^2=0$, and will make it clear in the revision. 6. Our modified version of NAG has the same rate as the classical one for $L$-smooth functions. 
The modification is minor and for technical convenience. We are not sure whether the modification is necessary for generalized smooth functions. We will add some empirical results based on the suggestions. 7. Thank you for pointing out the typos, and we will fix them in the revision.
Summary: This paper introduces a new assumption generalizing classical smoothness, named $\ell$-smoothness, motivates it by providing examples of functions that are not globally Lipschitz-smooth yet belong to this class, and studies classical algorithms under this assumption. Strengths: - The paper is clear and fairly compared to related work. - The new class is well defined and motivated, and the authors show that we can obtain results under it. Weaknesses: $\underline{\text{General remarks}}$: - First, I would like to make a somewhat subjective statement about the class: in my humble opinion, it seems a bit flawed as it does not respect fundamental homogeneity properties. Let me explain: Let $f\in\mathcal{F}_{\ell}$, the class of $\ell$-smooth functions. For $\alpha, \lambda > 0$, define $g$ as $g(x)=\frac{\alpha}{\lambda^2}f(\lambda x)$ (assume wlog that their minimum is at 0, otherwise translate $f$, then create $g$). We verify $\nabla g(x)=\frac{\alpha}{\lambda}\nabla f(\lambda x)$ and $\nabla^2 g(x)=\alpha\nabla^2 f(\lambda x)$, hence $\|\nabla^2 g(x)\| = \|\alpha\nabla^2 f(\lambda x)\| \leq \alpha \ell( \|\nabla f(\lambda x)\|) = \alpha \ell( \frac{\lambda}{\alpha}\|\nabla g(x)\|)$. Finally, $g\in\mathcal{F}_{\alpha \ell(\frac{\lambda}{\alpha} .)}$. Let us assume that after some analysis of an algorithm like GD (this reasoning also applies to momentum), we find out that the step-size that achieves the best worst-case guarantee on the class $\mathcal{F}_{\ell}$ is $\gamma_{\ell}$. Then, if I need to optimize $f$, I will use $x_{t+1} = x_t - \gamma_{\ell} \nabla f (x_t)$. Now, if instead I minimize $g$, I will use $x_{t+1} = x_t - \gamma_{\alpha \ell(\frac{\lambda}{\alpha} .)} \nabla g (x_t) = x_t - \gamma_{\alpha \ell(\frac{\lambda}{\alpha} .)} \frac{\alpha}{\lambda}\nabla f(\lambda x_t)$. 
Note that $f$ and $g$ have the same minimum and if we introduce the iterates $y_t = \lambda x_t$, we have $x_{t+1} = x_t - \gamma_{\ell} \nabla f (x_t)$ when minimizing $f$, and $y_{t+1} = y_t - \gamma_{\alpha \ell(\frac{\lambda}{\alpha} .)} \alpha\nabla f(y_t)$ when minimizing $g$. In short, we need to have $\gamma_{\ell} = \alpha \gamma_{\alpha \ell(\frac{\lambda}{\alpha} .)}$, or again $ \gamma_{\alpha \ell(\frac{\lambda}{\alpha} .)} = \frac{\gamma_{\ell}}{\alpha}$. Indeed, they both optimize their dynamics. This shows that $\lambda$ has no impact on the optimal way to tune an algorithm. And we can stretch the function $\ell$ as much as we want and notice that probably only $\ell(0)$ matters, i.e. the smoothness constant at the optimum. In particular, applied to the case where $\ell(x) = L_0 + L_1x$, it is clear that the optimal parameter cannot depend on $L_1$. This observation is explained by the lack of homogeneity in the formula: when scaling a function, only $L_0$ is scaled, not $L_1$. In the $L$-smooth class, the worst-case function generally does not belong to the $(L-\varepsilon)$-smooth class. Hence there is a hierarchy of the classes, but here, it seems that by scaling a function, the worst-case dynamics do not depend on some part of the class definition, leading to a useless specification. This point can be further discussed. These are just some thoughts I had based on the class definition, which I have never used, and I am aware from the related works section that some people are working with it. Thus I would be happy if the authors could discuss this point. However, based on this remark, my first guess was that the analysis would basically reduce to the same analysis as when $\ell$ is constant, or $L_1=0$, which leads to my next point. - Second, we indeed recover classical analyses everywhere in this paper. 
There indeed is an additional argument, namely that the sequence of iterates stays in a compact set and, by regularity of $f$, we can bound the gradients, hence the Hessians, and conclude with all the classical analyses. Indeed: - Th4.2 and 4.3 use classical Lyapunov analyses. - Lemma 4.1:
Using cocoercivity, the authors prove that $\|\nabla f(x_t)\|$ is decreasing. 
Using the same proof, one could show that $\|x_t-x_\star\|$ is decreasing as well. And using the descent lemma, we have that $f(x_t)$ is also decreasing. Finally, we can prove the descent lemma without the assumption of $\ell$-smoothness by just assuming continuity of the Hessian, and taking $G$ as an upper bound of $\lbrace \|\nabla^2 f(x)\| \mid x\in\mathcal{X} \text{ and } f(x)\leq f(x_0) \rbrace$, where the sub-level set is assumed to be compact since the authors assume that the function tends to infinity on the border of $\mathcal{X}$. We have the revisited descent lemma: $f(x_{t+1}) \leq f(x_t) - \eta\|\nabla f(x_t)\|^2 + \frac{1}{2}\eta^2 G \|\nabla f(x_t)\|^2$, and by taking $\eta$ sufficiently small, we ensure in one calculation that - 1) $f(x_t)$ is decreasing and all the Hessians remain bounded by $G$; - 2) the squared gradients are summable, hence the classical complexity $O(1/\varepsilon^2)$, as in Theorem 5.1, obtained in a much simpler way.
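As a toy illustration of this argument (a sketch, not the paper's proof): on $f(x)=x^4$ the Hessian $12x^2$ is bounded by $G=12$ on the initial sub-level set $\{f \le f(1)\}$, and with $\eta = 1/G$ both the revisited descent lemma and the summability bound $\sum_t \|\nabla f(x_t)\|^2 \le 2(f(x_0)-f_\star)/\eta$ can be checked directly:

```python
# Sketch of the reviewer's "revisited descent lemma" argument on f(x) = x**4,
# x0 = 1: the Hessian 12*x**2 is bounded by G = 12 on {f <= f(x0)}, and with
# eta = 1/G each step satisfies f(x_next) <= f(x) - (eta/2) * g**2, which
# makes the squared gradients summable.

f = lambda x: x**4
grad = lambda x: 4 * x**3

G, x = 12.0, 1.0
eta = 1.0 / G
sq_grad_sum, f0 = 0.0, f(x)

for _ in range(10_000):
    g = grad(x)
    x_next = x - eta * g
    # revisited descent lemma (with eta*G = 1, the last two terms combine)
    assert f(x_next) <= f(x) - 0.5 * eta * g**2 + 1e-12
    sq_grad_sum += g**2
    x = x_next

assert sq_grad_sum <= 2 * (f0 - 0.0) / eta   # f* = 0 for this toy function
```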
Of course, here we only assumed the Hessian to be bounded, so $\ell$-smoothness + bounded gradients do the job. In conclusion, in my opinion, most of the results are almost straightforward from what is known in the literature. - Third, in my opinion, the stochastic assumption A4 is too strong. I am aware the authors claim that some works in the literature assume even stronger assumptions, but A4 is way too strong: no multiplicative noise, only additive. This paper claims to generalize smoothness, but linear regression with MSE loss is not even covered by Section 5.2. Plus, bounded variance is too easy to handle in general and leads to an analysis close to the deterministic case.
Instead, the authors should consider expected smoothness (or its equivalent in $\ell$-smoothness). Actually, under the assumption that there is a finite number of functions under consideration, I would guess we could generalize the same arguments as in the deterministic case to ensure all the Hessians are bounded on the optimization iterates, and basically use the classical SGD proof in the smooth case. $~$ $\underline{\text{Minor}}$: - Clarity: - Assumption 1: First recall the definition of a « closed function » - Also define « sub-quadratic » - l.468: It took me a while to find where this proof was. Please do as for other propositions: state it right before the proof. In general, please state all theorems right before their proofs if reported in the appendix. Use the « restatable » LaTeX package to avoid renumbering. - Proof of Lemma B1: From convexity and local smoothness, the authors apply the $\underline{\text{descent lemma}}$ on the $\underline{\text{Bregman divergence}}$ of the objective function to obtain local $\underline{\text{cocoercivity}}$. The proof is the same as for global smoothness. Yet, for completeness and care of the neighborhood needed to obtain the cocoercivity, I understand that the proof is provided. However, it needs a reference to this classical result in the globally smooth case and a mention of the 3 underlined terms when used. - l.550: Cocoercivity - l.553: Bregman divergences - l.562: Descent Lemma - l.560: please introduce y: « let y [as in the lemma statement] » - l.248: « Theorem 5.1 gives the classical $O(1/T)$ rate, or $O(1/\varepsilon^2)$ gradient complexity » -> $O(1/\sqrt{T})$. The authors do not clearly state whether they are talking here about the gradient norm or its square, but they need to be consistent when talking about rate and complexity. - Typos: - l.2: "Lipshitzness" -> Lipschitz continuity. - l.76: "strong convex" -> strongly convex - l.79: $\nabla$ is missing - l.144: « $x = x_2 = x_t$ » ?
I guess "$x$" needs to be removed - l.169: « accelearted » -> accelerated - l.570: 1/L -> 2/L - Misc: - Table 1: missing hline + be precise on the meaning of « - ». - Table 1: for GD non-convex: « Inf or $\Omega$ … » could be summed up as « $\Omega$ … ». - l.121-122: Put 2) under 1). It should not exceed the current number of lines. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Can the authors discuss my first point? Also, can they try to study SGD under the expected smoothness assumption? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the insightful thoughts and comments! Below we will clarify the three points in the review. **1. Regarding the first point**, the reviewer made a very interesting argument regarding the optimal worst-case step-size $\gamma_{\ell}$, as defined in the comment. However, we want to point out that, for our $\ell$-smooth functions, **the stepsize has to depend on the initialization point**, in addition to $\ell$. Otherwise, let us consider the simple convex function $1/x+1/(1-x)$ with domain $(0,1)$. For any positive stepsize independent of the initial point, we can always find some initialization point $x_0$ close enough to $0$, whose gradient is large enough so that after one step of gradient descent, $x_1>1$ goes outside of the domain. Let the optimal worst-case stepsize be $\gamma_{\ell, \\|\nabla f(x_0)\\|}$, which depends on both $\ell$ and $\\|\nabla f(x_0)\\|$, as in all our theorems. Following your reasoning, we can obtain something like $\gamma_{\ell, \\|\nabla f(x_0)\\|}=\alpha \gamma_{\alpha \ell(\frac{\lambda}{\alpha}\cdot), \\|\nabla g(x_0/\lambda)\\|}=\alpha \gamma_{\alpha \ell(\frac{\lambda}{\alpha}\cdot), \frac{\alpha}{\lambda}\\|\nabla f(x_0)\\|}$ (note that the initial point for minimizing $g$ is re-scaled to $x_0/\lambda$). From this equation, we can see that $\lambda$ indeed matters. In fact, this equation is very consistent with our step-size choice in Theorem 4.2: $\gamma_{\ell, \\|\nabla f(x_0)\\|}\approx \frac{1}{\ell(\\|\nabla f(x_0)\\|)}$ (which we believe is also worst-case optimal). With this choice, both sides of the equation are the same. For the case where $\ell(x)=L_0+L_1x$, the step-size is $\frac{1}{L_0+L_1\\|\nabla f(x_0)\\|}$, which does depend on $L_1$. Therefore, the worst-case dynamics do depend on $L_1$ and there is indeed a hierarchy in our function classes.
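The rebuttal's counterexample can be checked in a few lines (a toy sketch): for $f(x)=1/x+1/(1-x)$ on $(0,1)$, whatever fixed step size $\gamma$ is chosen, initializing at $x_0 = \sqrt{\gamma/2}$ makes a single gradient step overshoot past $1$:

```python
# Sketch of the rebuttal's counterexample: for f(x) = 1/x + 1/(1-x) on (0, 1),
# no step size chosen independently of the initialization keeps gradient
# descent inside the domain -- starting close enough to 0, one step escapes.

grad = lambda x: -1.0 / x**2 + 1.0 / (1.0 - x)**2

for gamma in (1e-1, 1e-3, 1e-6):
    x0 = (gamma / 2.0) ** 0.5            # initialization adapted to defeat gamma
    x1 = x0 - gamma * grad(x0)           # one gradient-descent step
    assert 0.0 < x0 < 1.0 and x1 > 1.0   # the iterate leaves (0, 1)
```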
Regarding your point about the lack of homogeneity, it is true that when you scale $f$ to $g=\alpha f$, the constant $L_1$ does not change much. However, that only applies to the specific setting where $\ell$ is linear. For example, consider the function $f(x)=1/x$, which is $(3/2, 0, 2)$-smooth per Definition 3 in our paper. After scaling it to $\alpha/x$, it becomes $(3/2, 0, 2/\sqrt{\alpha})$-smooth, which means the constant $L_{3/2}$ changes from $2$ to $2/\sqrt{\alpha}$. **2. Regarding the second point**, we want to clarify the following several points in your argument: - (1) The set $\mathcal{S}:=\\{x\in\mathcal{X}, f(x)\le f(x_0)\\}$ is NOT compact. We only assume $f$ is closed (the definition of a closed function is that all of its sub-level sets are closed). So $\mathcal{S}$ is not necessarily bounded (and thus not necessarily compact). Consider the closed function $f(x)=\exp(-x)$ with domain $\mathbb{R}$, which satisfies all our assumptions. Clearly $\mathcal{S}$ is not compact for this example, and the iterates also go to infinity. Since $\mathcal{S}$ is not compact, it is not so straightforward to bound the gradient norm or Hessian within it. - (2) The assumption of continuity of the Hessian is not necessarily weaker than $\ell$-smoothness, since for the latter the Hessian may not exist at some points (of measure zero). For example, the function we use to show the lower bound in Theorem 5.3 has some points where the Hessian does not exist. - (3) However, if assuming $\ell$-smoothness with a sub-quadratic $\ell$, we can indeed bound the gradient norm (and thus also the Hessian) within $\mathcal{S}$ using our Lemma 4.5. However, we want to point out that Lemma 4.5 is derived based on Proposition 3.3 (the equivalence between $\ell$-smoothness and $(r,\ell)$-smoothness), which is indeed nontrivial and one of our novel contributions. This approach does offer a different way of proving convergence of GD.
We were actually aware of this alternative approach and already mentioned it briefly in Section 5.3 (Lines 326--333). - (4) However, this approach only works easily for the simple algorithm GD, not for NAG or SGD, because for NAG, SGD, and potentially other more complicated algorithms, the function values are not necessarily decreasing. For SGD, iterates can easily escape from $\mathcal{S}$ due to its heavy-tailed noise. For NAG, although we can easily bound $f(x_t)$, what we really need is a bound on $f(y_t)$, which turns out to be quite challenging. **3. Regarding the third point**, although our bounded noise assumption does not apply to linear regression with MSE loss, we think it is the most standard assumption in the optimization literature and is interesting and challenging enough. Noise with bounded variance is actually heavy-tailed, and SGD with such noise is much more challenging than deterministic GD. For example, consider the function $f(x)=1/x+1/(1-x)$ with the bounded domain $(0,1)$. With noise, the iterates may easily go outside of the domain. Indeed, if you keep running, they will go outside with probability $1$. So the key is to show that, with high probability, the algorithm converges before going outside. Also, since the noise is heavy-tailed, applying a naive union bound does not work, and we feel it is necessary to use a stopping-time analysis (and the optional stopping theorem), which is novel compared with the classical one-step analysis. We thank the reviewer for pointing out the expected smoothness condition. We feel it should be doable, and there are actually recent papers studying this condition for $(L_0,L_1)$-smooth functions and variance-reduction methods (see [Reisizadeh et al., 2023] cited in our paper). However, it is not necessarily weaker than our bounded variance assumption, and we have reservations about whether the former is more interesting or challenging than the latter.
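The rebuttal's scaling example under point 1 can also be verified numerically: $f(x)=1/x$ satisfies $|f''(x)| = 2|f'(x)|^{3/2}$, and scaling to $\alpha f$ changes the constant to $2/\sqrt{\alpha}$ (a direct check, reading $(3/2, 0, 2)$-smoothness as $\|\nabla^2 f\| \le L_0 + L_{3/2}\|\nabla f\|^{3/2}$, which is one plausible reading of the paper's Definition 3):

```python
# Numerical check of the rebuttal's example: f(x) = 1/x satisfies
# |f''(x)| = 2 * |f'(x)|**(3/2), i.e. (3/2, 0, 2)-smoothness, and scaling
# f -> alpha * f changes the L_{3/2} constant to 2/sqrt(alpha).
import math

fp  = lambda x: -1.0 / x**2        # f'
fpp = lambda x:  2.0 / x**3        # f''

alpha = 9.0
for x in (0.1, 0.5, 1.0, 3.0):
    assert math.isclose(abs(fpp(x)), 2.0 * abs(fp(x)) ** 1.5)
    # after scaling, the constant becomes 2/sqrt(alpha)
    assert math.isclose(abs(alpha * fpp(x)),
                        (2.0 / math.sqrt(alpha)) * abs(alpha * fp(x)) ** 1.5)
```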
Finally, we thank the reviewer for the suggestions regarding the writing, typos, etc., and will update them accordingly in the revision. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: First of all, I thank the authors for their very detailed answer. 1. I agree with their response, since what really matters for tuning the parameters is essentially the Lipschitz constant that holds along the trajectory of the algorithm. And indeed, this reasoning is consistent with their parameter choice in Thm 4.2, which is a good point. 2. I also agree that my reasoning was quick, hand-wavy, and skipped some technical difficulties. I agree with some statements made by the authors, e.g., that Lemma 4.5 is based on the non-trivial Proposition 3.3. Note this does not mean it was necessary, and I would be surprised if there were no simpler proof. I agree that analyses for non-monotone methods like NAG are less easy to handle. Typically $\|\nabla f\|$ should be replaced by some Lyapunov function that works for the given algorithm. Typically, I would expect a proof based on the following: - Under $L$-smoothness, we know a Lyapunov function $V^{L}$ (I make the dependency on $L$ explicit as this is what will change under $\ell$-smoothness). Of course, $V^{L}$ typically depends on the iterate's distance to the optimum, on the function value, and on the gradient norm. - We show some result like $\|\nabla f(x)\|^2 \leq V^L(x)$. $V^L$ often has a component $\|\nabla f(x)\|^2$ (when well chosen, NAG's Lyapunov has one), which makes the inequality trivial. Otherwise, some more computation may be needed, for instance to bound $\|\nabla f(x)\|^2$ by $f(x) - f_\star$ (which might be hard in this case). - Then, since the Hessian is bounded by a function of the gradient norm, i.e.
by a function of the Lyapunov, we might be able to bound the Hessian, which tunes the algorithm under smoothness (the difficulty lies in the fact that the Lyapunov depends on the tuning, which might bring a constraint on the parameter setting that is not so easy to verify), and since the Lyapunov is decreasing assuming smoothness, we keep the same smoothness later on. Of course, I agree this reasoning is again an intuition that needs more formalization. Also, I acknowledge that, as of now, I do not have a proper better way to propose to solve the problem this paper addresses. Then, I will not stand against acceptance for the sole reason that I think there might be much simpler ways to do it. And since I have no other reason to reject this paper, which addresses a new problem with theoretical guarantees, I will upgrade my score to weak accept. - I will conclude on the third point. I still think that this assumption on the noise should be banned from optimization, especially in papers that introduce a new class for the purpose of generalization, a class that contains quadratics but whose stochastic assumption excludes linear regression. As for the difficulty, I indeed had unconstrained problems in mind when saying that, where the noise only adds a constant at each step and the residual is used to tune the algorithm, making the analysis more than easy. But indeed, in this case, I understand the need for stopping times. So it might indeed be more technical than I claimed, but still not very interesting in my opinion. In any case, this is subjective, and I based my score on the deterministic contribution. I still consider that an expected-smoothness-like assumption should be preferred. I don't see it used in the paper the authors mentioned. And I would say it is actually a weaker assumption than the one the authors used. Indeed, expected smoothness is directly implied, in the smooth setting, by the existence of a variance of the noise at the optimal point only.
A generalization of this idea to the $\ell$-smooth setting would surely be weaker than assuming the existence of a uniform bound on the variance of all the gradients. Best regards.
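For concreteness, the expected-smoothness condition the reviewer prefers (in the common form $\mathbb{E}_i\|\nabla f_i(x)-\nabla f_i(x_\star)\|^2 \le 2L(f(x)-f(x_\star))$ from the smooth SGD literature) can be checked on exactly the linear-regression example that the bounded-variance assumption excludes; $L=\max_i\|a_i\|^2$ below is a simple valid, not necessarily tight, constant:

```python
# Sketch of the expected-smoothness condition on least squares:
# with f_i(x) = 0.5 * (a_i @ x - b_i)**2 and x* the least-squares minimizer,
#   E_i || grad f_i(x) - grad f_i(x*) ||^2  <=  2 L (f(x) - f(x*))
# holds with L = max_i ||a_i||^2 (a crude but valid choice).
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]

L = max(np.sum(A**2, axis=1))                   # max_i ||a_i||^2
f = lambda x: 0.5 * np.mean((A @ x - b) ** 2)

for _ in range(100):
    x = rng.normal(size=d)
    grads = A * (A @ x - b)[:, None]            # row i: grad f_i(x)
    grads_star = A * (A @ x_star - b)[:, None]  # row i: grad f_i(x*)
    lhs = np.mean(np.sum((grads - grads_star) ** 2, axis=1))
    assert lhs <= 2 * L * (f(x) - f(x_star)) + 1e-9
```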
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper generalizes the classic Lipschitz smooth gradient condition, as well as a recent improvement. The proposed condition essentially says that the Hessian norm is bounded by a non-decreasing function of the norm of the gradient. Using such conditions, the authors prove convergence rates of gradient descent under various settings (convex, strongly convex, non-convex, stochastic). Classic optimal rates are recovered. Strengths: The paper is well presented. The technical contribution looks original and significant, as it is a generalization of the classic smoothness. Intuitions are shared for proving the convergence. Weaknesses: Overall, I do not have major concerns. 1. The proof of Theorem 5.1, in particular the split of two time steps in (4), seems delicate and interesting, but also a bit out of the blue to me. Is this splitting novel, or did part of it show up in the literature? What motivates such a splitting? 2. Do you have examples of objective functions that show up in machine learning or other applications, such that they satisfy the generalized smoothness, but not the conventional smoothness? This would add much significance to the paper. 3. It was not clear to me when we assume the function to be C^2 and when not. In the introduction, it appears at first sight that only twice differentiable functions are considered. Then I saw Definition 2, so it is not true. Then I got confused again at Proposition 3.3: it seems that for l-smoothness to hold you need f to be C^2. Did you assume it somewhere? 4. Line 27: “provide a lower bound”. Do you mean on the number of iterations or on the error, or both? 5. Table 1: Why is it called gradient complexity? Is it the number of times the gradient oracle needs to be queried? I thought a more common name is iteration complexity, but maybe I am wrong. 6. Lines 46-48: “if gradients along the trajectory are bounded by a constant G, then the Hessian norms are bounded by the constant ℓ(G).” Why is this “if-then” true?
Say we look at a univariate objective function f: R -> R. Its gradient g is also univariate. Are you saying that if g is bounded by a constant G, then |nabla g| is also bounded? This cannot be true, since nabla g can be arbitrarily large. 7. Line 77: “condition number” of what? 8. Proposition 3.6: “If” -> “Suppose” Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments! Below we will try to address the questions of the reviewer. 1. Regarding the proof of Theorem 5.1, this splitting is indeed novel and we are not aware of any previous optimization analysis using similar techniques. Let us briefly talk about the motivation for this approach below. First, it should be natural to consider $\tau$ since our contradiction hypothesis is equivalent to $\tau<\infty$. Then we know that before $\tau$, everything is well bounded, e.g., gradients and Hessians. Then, informally speaking, classical analysis directly gives an upper bound of $S_{\text{uc}}$, the area under the curve in Figure 1. However, since the gradient norm at $\tau$ is very large by definition, if we assume there are no abrupt changes in the curve (which is true for a small enough step size), then in some neighborhood around and before $\tau$, the gradient should stay large. Then a lower bound on the size of such a neighborhood essentially gives you a lower bound on $S_{\text{uc}}$, and potentially leads to a contradiction. So we introduce $\tau_{1/2}$ just to rigorously define the neighborhood. 2. Regarding examples in machine learning applications, we think the empirical findings in the papers that study $(L_0,L_1)$-smooth functions (e.g., [Zhang et al., 2019] and [Wang et al., 2022]) can also be used to motivate our more general $\ell$-smooth functions. They observe that when you train a language model, along the trajectory of certain optimizers, log(Hessian norm) is roughly a linear function of log(gradient norm) (i.e., the Hessian norm is roughly a polynomial function of the gradient norm). So we think our $\ell$-smooth function class can better capture this property. 3. This is a good question. We NEVER assume twice differentiability in this paper. Definition 1 only assumes twice differentiability **almost everywhere**, which means there may be a measure-zero set where the Hessian does not exist.
Our Proposition 3.3 is rigorous. When proving the direction from Definition 2 to Definition 1, we rigorously show that any $(r,\ell)$-smooth function defined in Definition 2 is twice differentiable almost everywhere. This uses the Rademacher Theorem (which states that any Lipschitz function is differentiable almost everywhere) and a covering argument (see Lines 509-517 in the appendix). 4. Here we mean a lower bound on the iteration or gradient complexity. We will make it clear in the revision. 5. Yes, gradient complexity is exactly the number of times the gradient oracle needs to be queried, i.e., the oracle complexity when the oracle is a gradient oracle. For our theorems, it is equivalent to iteration complexity because we query the oracle exactly once at each iteration. We think it is as common as iteration complexity; we use it because existing lower bounds are usually presented in terms of oracle complexities [Carmon et al., 2017, Arjevani et al., 2019]. 6. This argument is a direct consequence of Definition 1, which bounds the Hessian using the gradient norm. 7. Here we mean the condition number of the objective function. It is defined as $L/\mu$ for $L$-smooth and $\mu$-strongly-convex functions. 8. Thank you for pointing out the typo. We will fix it in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the response! My questions are well addressed. I upgraded my rating.
No Change, No Gain: Empowering Graph Neural Networks with Expected Model Change Maximization for Active Learning
Accept (spotlight)
Summary: This work proposes a novel active learning method for graph neural networks, extending the classic expected model change maximization method from the general active learning setting. The proposed method starts from a new Bayesian interpretation of the SGC model and unifies the training process of GNN models, including both forward and backward passes. Then the authors analyze the challenges of applying EMCM on graphs and use some approximation techniques for efficient computation. Finally, the connection between the proposed method and the expected error minimization method is revealed from the theoretical aspect. Strengths: 1. Originality: The paper is novel in general, with a new adaptation of the EMCM method to graphs and a new understanding of GNN training from the Bayesian view. The paper is the first work to extend the classic expected model change maximization method from the general active learning literature to graph-structured data. The new unified Bayesian view of the GNN training process is also interesting and enlightening to the field. Previous works only consider the feedforward pass of the training when trying to unify the GNNs from the optimization perspective. 2. Quality: The quality of this work is above average, with a clear motivation and an interesting theoretical analysis of the method. First, the proposed method is strongly motivated by the Bayesian view of GNNs from the theoretical perspective. Theorem 3.2 reveals the Bayesian interpretation of the general GNN training process, which lays the foundation of the proposed method. Second, the approximated expected model change in terms of node embeddings can be solved by an elegant closed-form solution in Eq.(11) without re-training the model, which is quite efficient after employing some well-known simple techniques from spectral graph theory.
Third, the paper also theoretically proves that the proposed DOCTOR algorithm reduces to the EEM method when some simple assumptions are made, showing that DOCTOR equivalently reduces the expected test error as well, just like the EEM method. Fourth, the experiments are convincing, with SOTA baselines (most of them from after 2022) regarding both accuracy and efficiency comparison. 3. Clarity: The reviewer finds the paper easy to follow, perhaps since the reviewer is familiar with the related major references in both active learning and GNNs. The overall presentation of the paper is fairly ok, with clear notations and concise statements of the theorems. The case study in the experiments is also illustrative and supports Theorem 3.5 in a more concrete and empirical manner. 4. Significance: The reviewer thinks this paper makes some contributions to both the active learning community and the GNN community, since it connects these two domains and solves a significant problem, that is, how to adapt classic active learning algorithms to GNNs. This work chooses the expected model change maximization algorithm and solves the potential challenges of extending EMCM to graphs effectively. For researchers in the AL domain, this work provides a new adaptation of EMCM specifically designed for GNNs. For researchers in the GNN domain, this work also presents a performance-based AL method for GNNs, since most of the current graph active learning literature focuses only on uncertainty-based or information-density-based methods without good theoretical interpretation. Weaknesses: There are several potential improvements for this paper. 1. The background knowledge of the EMCM method in general active learning could be elaborated. The details behind Eq.(7) should be provided to give readers without active learning backgrounds more context. For example, the intuition behind the design of Eq.(7) can be discussed. 2. For the efficiency comparison, only three small-scale datasets are used.
It is suggested to test the running-time performance of the proposed method and other baselines on large-scale datasets like the ogbn datasets. 3. The conclusion part is short. More future work can be added. 4. Some typos should be fixed. For example, in line 219, "embedding s" should be "embeddings". 5. Other comments are in the questions and limitations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The current setting is only designed for node classification tasks. What about link-level and graph-level tasks? The reviewer knows that this question may be out of the scope of this work, but the reviewer is curious about the potential of extending this work to tasks like link prediction. Can we directly use the DOCTOR algorithm? If not, what are the challenges? 2. The designed algorithm uses a simple sampling algorithm when dealing with the batch active learning setting. What are the possibilities of using other more advanced techniques to choose the query nodes in batches? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The reviewer generally agrees with the limitations discussed in Appendix H. This work currently only supports the batch active learning setting with a simple extended version of the sequential active learning solution. However, it is well known that such a simple extension ignores the correlations between the query nodes in one batch, as they may reside within the same cluster in the embedding space. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review! We are glad to know that you found our paper novel, sound, clear, and significant. We hope these responses will address your concerns appropriately. ## 1. Background knowledge of the EMCM method **The background knowledge of the EMCM method in general active learning is discussed in Appendix B.3.** We introduce a high-level intuitive interpretation of the EMCM principle in AL (Eq.(7)) here. **The model change is a reasonable indicator for reducing the generalization error for the following two major reasons. First, the generalization capability can be changed if and only if the current model is changed. As a result, it is useless to query an instance that cannot update the current model in AL. Second, the data points that significantly change the current model are expected to produce a faster convergence speed to the true model,** and this is the underlying motivation behind the EMCM framework. ## 2. Running time performance of the proposed method and other baselines on large-scale datasets In fact, **we compare the running time performance of the proposed method and other baselines in Appendix G.1.2, including one large-scale dataset, Reddit.** Figure 7 shows that our method is both accurate and efficient. **Following the suggestion of Reviewer bxJi, we will move Figure 7 to the main body for better presentation.** ## 3. Short conclusion **We will add more discussion in the conclusion part along with the limitations of our method. The limitations and future works are discussed in Appendix H. We will also make sure there are no typos.** ## 4. Extensions of the proposed method For the extension of our proposed method to the link prediction task, we believe that we cannot directly apply the proposed algorithm, because it is challenging to derive a new closed-form solution of the acquisition function for the link-level task, since the starting Bayesian interpretation of GNNs only works for the node-level task.
We leave the exploration of graph active learning methods for link-level tasks as future work. We also leave the exploration of more advanced batch active learning variants of our proposed method as future work. One possible solution is to incorporate the techniques [1] used in the general batch active learning task for EMCM. Both of these directions are very promising extensions of our work. [1] Cai, Wenbin, Muhan Zhang, and Ya Zhang. "Batch mode active learning for regression with expected model change." IEEE Transactions on Neural Networks and Learning Systems 28.7 (2016): 1668-1681.
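For readers without an active-learning background, the EMCM weighting discussed in the rebuttal's first point can be sketched generically (a toy softmax classifier, with gradient norm as the model-change surrogate following the classic EMCM literature; the paper's graph-specific closed form in Eq.(11) is different and not reproduced here):

```python
# Generic EMCM acquisition sketch (illustration only, not the paper's method):
# score(x) = sum_y P(y | x) * || grad_theta loss(x, y) ||, i.e. the model
# change each hypothetical label y would cause, weighted by its predictive
# probability; the unlabeled point with the largest score is queried.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def emcm_score(theta, x):
    """theta: (classes, features) weights; x: (features,) sample."""
    p = softmax(theta @ x)
    score = 0.0
    for y_hyp, p_y in enumerate(p):            # iterate hypothetical labels
        onehot = np.eye(len(p))[y_hyp]
        grad = np.outer(p - onehot, x)         # cross-entropy gradient wrt theta
        score += p_y * np.linalg.norm(grad)    # model change ~ gradient norm
    return score

rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 4))
pool = rng.normal(size=(10, 4))                # unlabeled candidate pool
query = max(range(len(pool)), key=lambda k: emcm_score(theta, pool[k]))
```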
Summary: This paper proposes a new active learning strategy based on the EMCM principle for the task of graph node-level semi-supervised prediction. The most significant contributions of this paper are 1) extending the EMCM principle to GNNs, leading to a MAP-estimate-correlated (interpretable) acquisition function on graphs, and 2) proposing a regularized single-level optimization process to solve the bi-level optimization problem. The comprehensive experiments demonstrate the efficiency and effectiveness of the acquisition function. Strengths: - S1: Great presentation. - S2: Equipping the AL acquisition function with Bayesian interpretability is a promising motivation. - S3: The techniques and explanations are largely sound. - S4: Promising experimental results. Weaknesses: - W1: The name of the method makes no sense: expecteD mOdel Change maximizaTion On gRaphs (DOCTOR). It fails to convey the AL strategy, making it hard for people to connect this name with the method. I prefer a clear name: Graph Expected Model Change Maximization (GEMCM), which indicates both the task and the method. - W2: Typo on Page 2, line 55: "efficacy and efficacy" - W3: Page 3, line 109: "the nodes are not i.i.d. but linked with edges such that connected nodes tend to have the same label." I can't see the correlation between this challenge and the method. I suggest explaining it with the modeling of graphons. - W4: Line 218: Can you answer what happens if the hypothetical label is wrong? It can be the case that we have a high probability with a wrong label and a low change, but a low probability with the true label and a high change. Intuitively it makes no sense. This is a non-trivial limitation and should be mentioned. - W5: Can you discuss to what extent your approximation is valid? I mean, can you bound the difference between your approximation and the expectation? - W6: I expect the authors to discuss the connection between the proposed method and strictly proper scoring rules instead of only the EEM.
I also expect a theoretical comparison between the proposed method and entropy minimization (maybe in terms of strictly proper scoring rules). - W7: I'm confused about the reason why the forward propagation in equation (2) is a constraint, and why this constraint can be transformed into the optimization problem in Theorem 3.1. I suggest providing an intuitive explanation here. - W8: Line 234: Can you add an explanation for the reason why we maintain the smallest eigenvalues (high-frequency) and corresponding eigenvectors instead of the largest ones? I noticed the literature believes that high-frequency information leads to out-of-distribution problems. Is this belief related to your approximations here? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: My questions are listed in the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $$\textcolor{red}{\text{The detailed rebuttal will be given if needed. Please let us know which point needs further clarification after reading this compact response.}}$$ Thanks for such a brilliant and constructive review! 1. Model name. The new name is a great suggestion due to the implied interpretable meaning. We will change it to GEMCM. 2. Typo. We will change the second "efficacy" to "efficiency". 3. Graphon modeling. In fact, the non-i.i.d. challenge is indeed not directly relevant to our proposed method. We include this challenge mainly because it hinders the direct application of general active learning techniques. We do agree that graphons can come into the picture as a mathematical tool to model large and complex graphs, capturing the non-i.i.d. characteristics of graph-structured data, including dependencies or homophily. We will include a discussion on graphons. 4. Hypothetical label. The hypothetical label is an artificially assigned temporary label for an unlabeled node or sample when selecting the query node from the unlabeled pool. It is a term introduced in the look-ahead model for active learning, and it can be assigned as an arbitrary label in the label space. Eq. (7) iterates over all possible hypothetical labels $y_k^+$, computes the corresponding model change $u(\tilde{\Theta}^*) - u(\Theta^*)$, and weights it with the corresponding predictive probability $\mathbb{P}(y_k^+ | k)$. Therefore, whether the hypothetical label is aligned with the actual hidden ground-truth label is not relevant here in the node selection stage; we only care about how large the final expected model change will be if we add each candidate node from the unlabeled pool to the next round of training. 5. Bound on approximation error. 
For the truncated spectral approximation, note that the original Laplacian matrix $\tilde{\mathbf{L}} \in \mathbb{R}^{n \times n}$ can be decomposed as $\tilde{\mathbf{L}} = \bar{\mathbf{V}}\bar{\boldsymbol{\Lambda}}\bar{\mathbf{V}}^T$. We only keep the top-$m$ smallest eigenvalues in $\bar{\boldsymbol{\Lambda}}$ and denote the new matrix as $\boldsymbol{\Lambda} \in \mathbb{R}^{m \times m}$. Accordingly, the corresponding eigenvectors are kept from $\bar{\mathbf{V}}$, so we get $\mathbf{V}\in\mathbb{R}^{n \times m}$. We can then project the node embedding $\mathbf{u}$ onto the space spanned by these $m$ truncated eigenvectors instead of all $n$ eigenvectors. Therefore, the approximation error can be formulated as $\|\mathbf{V}^T\mathbf{u} - \bar{\mathbf{V}}^T\mathbf{u}\|_2$ (padding $\mathbf{V}$ with zero columns so that the shapes match). Assume the node embedding is bounded, $\|\mathbf{u}\|_2 \leq C$. We have $$\|\mathbf{V}^T\mathbf{u} - \bar{\mathbf{V}}^T\mathbf{u}\|_2 = \|(\mathbf{V}^T - \bar{\mathbf{V}}^T)\mathbf{u}\|_2 \leq \|\mathbf{V}^T - \bar{\mathbf{V}}^T\|_F \|\mathbf{u}\|_2 \leq C \|\mathbf{V}^T - \bar{\mathbf{V}}^T\|_F = C \sqrt{n-m}.$$ For the Laplacian approximation, we apply it on top of $\mathbb{P}(\boldsymbol{\alpha} \mid \mathbf{y})$. According to the Bernstein–von Mises theorem, under some regularity conditions (the prior is positive, bounded, and twice differentiable), we have $$\sup_{\mathbf{z}} |\mathbb{P}(\boldsymbol{\alpha} \leq \mathbf{z} \mid \mathbf{y}) - \mathbb{P}(\tilde{\boldsymbol{\alpha}} \leq \mathbf{z})| \xrightarrow{\text{a.s.}} 0.$$ Here, $\boldsymbol{\alpha}$ follows the true posterior, which has no closed-form exact solution, and $\tilde{\boldsymbol{\alpha}}$ follows the approximated Gaussian distribution obtained via the Laplacian approximation in Theorem 3.3. This result shows that our approximation is quite good in terms of the gap between the cumulative distribution functions. 6. Strictly proper scoring rules. 
Thanks to the generalized active learning acquisition function (Eq. (1) in [1]), the lower-bound function $\mathcal{A}'(k)$ of our original acquisition function $\mathcal{A}(k)$ can now be incorporated into this framework. Other methods can be incorporated into this generalized active learning acquisition function as well. If we set the functional $Q(\mathbb{P}(\boldsymbol{\alpha} | L)) = I(\mathbb{P}(\boldsymbol{\alpha} | L))$ as the mutual information, we can obtain the acquisition function used in [2] that is based on Shannon's entropy. If we set the functional as Eq. (6) in [1], we can get the acquisition function based on strictly proper scoring rules. [1] Tan, Wei, Lan Du, and Wray Buntine. "Diversity enhanced active learning with strictly proper scoring rules." Advances in Neural Information Processing Systems 34 (2021): 10906-10918. 7. Constraints in Eq. (2). In Eq. (2), we first do one forward pass of SGC, and then optimize the resulting training loss at the upper level without caring about any further forward passes of SGC. Therefore, this one-time forward pass of the SGC model becomes a constraint in Eq. (2): when we directly optimize the training loss over $\boldsymbol{\Theta}$, the output $\mathbf{U}$ of the forward pass of SGC must be fixed. Theorem 3.1 focuses on the forward pass of SGC only; it provides an optimization perspective on the output (node embeddings) of the forward pass of SGC. Intuitively speaking, when we do a forward pass of SGC, we implicitly minimize the total variation of the node embeddings, and the node embeddings must be smooth over the graph after the forward pass of the SGC model, meaning that the node embeddings of two adjacent nodes should be similar. This corresponds to the homophily assumption on graphs. 8. Maintaining the smallest eigenvalues. In fact, maintaining the smallest eigenvalues corresponds to applying a low-pass filter. 
Therefore, when we only keep the smallest eigenvalues, we actually remove high-frequency noise in the graph and keep its most significant information. That is why we maintain the smallest eigenvalues, and the approximation is completed in this manner. --- Rebuttal Comment 1.1: Comment: Thank you for your insightful rebuttal. After your explanations, most of my concerns and confusions have been addressed. I'll update my score from 5 to 7: Accept. Please include all the modifications, explanations, and discussions in your final version. --- Reply to Comment 1.1.1: Comment: Thanks again for your support for our work! We sincerely appreciate your insightful feedback and the adjustment of your score. We will definitely keep improving our paper based on all of your constructive suggestions.
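The approximation-error bound for the truncated spectral projection (item 5 in the rebuttal above) can be checked numerically. Below is a minimal illustrative sketch, not the authors' code: the random graph, sizes, and variable names are all assumptions, and the truncated eigenvector matrix is padded with zero columns so the shapes match.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 10

# A random symmetric graph Laplacian (assumed purely for illustration).
A = rng.random((n, n)) < 0.1
A = np.triu(A, 1)
A = (A | A.T).astype(float)
L_tilde = np.diag(A.sum(axis=1)) - A

# Full eigendecomposition: columns of V_bar are orthonormal eigenvectors.
eigvals, V_bar = np.linalg.eigh(L_tilde)   # eigenvalues in ascending order
V = V_bar[:, :m]                           # keep the m smallest (low-pass)

u = rng.standard_normal(n)                 # a node embedding
C = np.linalg.norm(u)                      # bound ||u||_2 <= C

# Pad V with zero columns so V_pad has the same shape as V_bar.
V_pad = np.hstack([V, np.zeros((n, n - m))])

err = np.linalg.norm(V_pad.T @ u - V_bar.T @ u)
frob = np.linalg.norm(V_pad.T - V_bar.T, ord="fro")

# The dropped eigenvectors are orthonormal, so the Frobenius
# distance between the two bases is exactly sqrt(n - m), and the
# projection error satisfies the bound from the rebuttal.
assert np.isclose(frob, np.sqrt(n - m))
assert err <= C * np.sqrt(n - m)
```

As the sketch suggests, the $C\sqrt{n-m}$ bound is loose: the projection error never exceeds $\|\mathbf{u}\|_2$ itself, since it is the norm of the coordinates of $\mathbf{u}$ along the dropped orthonormal eigenvectors.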
Summary: The authors present yet another active learning approach for graph neural networks, claiming it to be novel. They attempt to build upon the classic expected model change maximization method but in a general active learning setting. To do so, they introduce a Bayesian interpretation of the SGC model, which supposedly unifies the training process of GNN models. Some approximation techniques are used, along with some theoretical analysis.  Strengths: 1. The novelty of this paper is ok. It extends the existing literature on general active learning by being the first work to apply EMCM to graphs. The unified Bayesian view also brings some new insights. 2. The method is well-motivated, with an intriguing theoretical analysis. The proposed method is also efficient both from the theoretical aspect and the experimental aspect. 3. The writing is satisfying, with clear logic and notations. I believe this work can benefit researchers from both the GNN domain and the active learning field. Weaknesses: 1. More discussions on the deep connection between the Bayesian interpretations of GNNs and the proposed active learning method DOCTOR should be provided. 2. More background knowledge on the look-ahead models in active learning literature should be provided. 3. More details on the experiments, like the hyperparameters settings, should be provided. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What are the inner connections between the unified Bayesian interpretations of GNNs and the subsequent proposed active learning method? A more compact explanation is preferred. 2. What are the look-ahead models in Section 3.2.3? The term is frequently used, but little context is provided about this term. Is it the potentially trained model after some labels are given by the oracle? 3. What about some possible extensions of this work? How does the proposed method deal with the noisy oracle? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There are no major limitations of this work. However, the reviewer has some open questions about possible extensions. For example, what if the labels have noise? In real-world cases, there is no perfect oracle in any application, since even human experts can sometimes make mistakes. How can this model be adapted to the noisy-oracle case? The reviewer would like to see the authors' thoughts on this question. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive review! We are glad to know that you found our paper novel, well-motivated, and well-written. We hope these responses will address your concerns appropriately. ## 1. Connections between the unified Bayesian interpretations of GNNs and the subsequent proposed active learning method The inner connections between the unified Bayesian interpretations of GNNs and the subsequent proposed active learning method are as follows. Under semi-supervised settings, current GNN models typically lack a Bayesian probabilistic interpretation. But in active learning, a Bayesian interpretation lays the groundwork for performance-based methods, to which our proposed method belongs: it incorporates prior knowledge and updates the posterior beliefs about the model parameters or predictions when new labeled data becomes available. Our method bridges this gap. More discussion can be found in Sec. 2.4.1. ## 2. Background knowledge on the look-ahead models Performance-based methods usually consider a look-ahead model. We first hypothetically assume one candidate node $x_+^k$ from the pool of unlabeled nodes is chosen for querying, and its label is revealed as $y_+^k$ by the oracle. Then we analyze the potential influence of this candidate node $(x_+^k, y_+^k)$ on the model's parameters or predictions if it is added to the labeled node set for the next round of training. **In fact, more discussion regarding the background knowledge of look-ahead models can be found in Appendix B.3.1.** ## 3. Possible extensions of this work Thanks for your insightful suggestion. One possible extension is to incorporate the case of a noisy oracle. For example, **we may consider integrating the techniques used in [1] to handle label noise in the oracle. We will leave it for future work since it is currently outside the scope of this work.** [1] Zhang, Wentao, et al. "Rim: Reliable influence-based active learning on graphs." 
Advances in Neural Information Processing Systems 34 (2021): 27978-27990.
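To make the look-ahead idea from point 2 concrete, here is a schematic sketch of an expected-model-change acquisition over candidate nodes. This is not the authors' actual implementation: `predict_proba`, `retrain_with`, and `change` are hypothetical placeholders for the current model's predictive distribution, the look-ahead retraining, and the model-change measure.

```python
import numpy as np

def expected_model_change(theta, candidates, n_classes,
                          predict_proba, retrain_with, change):
    """Schematic EMCM look-ahead: for each unlabeled candidate node,
    average the model change over all hypothetical labels, weighted
    by the current model's predictive probabilities."""
    scores = {}
    for k in candidates:
        p = predict_proba(theta, k)      # P(y_k^+ = c | k), length n_classes
        emc = 0.0
        for c in range(n_classes):       # iterate over hypothetical labels
            theta_new = retrain_with(theta, k, c)   # look-ahead model
            emc += p[c] * change(theta_new, theta)
        scores[k] = emc
    return max(scores, key=scores.get)   # query the node maximizing EMC

# Toy demonstration with a 1-D "model": theta is a scalar, retraining
# nudges it toward the hypothetical label, change is the distance.
predict = lambda theta, k: np.array([0.5, 0.5])
retrain = lambda theta, k, c: theta + (c - theta) * 0.1 * (k + 1)
dist = lambda a, b: abs(a - b)

best = expected_model_change(0.0, candidates=[0, 1, 2], n_classes=2,
                             predict_proba=predict, retrain_with=retrain,
                             change=dist)   # node 2 induces the largest change
```

Note how the sketch mirrors the rebuttal's explanation: the alignment of the hypothetical label with the hidden ground truth never enters the loop; only the probability-weighted magnitude of the induced change matters.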
Summary: The authors propose an active learning method for GNNs, extending the Expected Model Change Maximization (EMCM) principle to GNNs. A Bayesian interpretation of the node embeddings generated by GNNs under the semi-supervised setting is presented. By establishing a connection with expected prediction error minimization, theoretical guarantees on AL performance are derived. The numerical experiments show improved performance and runtime compared to existing approaches. Strengths: The paper is very well-written and clearly motivated. The overall proposed active learning approach makes sense and seems sound (although I haven't checked the proof details). I like the proposed approach for deriving a Bayesian interpretation of GNNs. There are sufficient experiments to support the claims as well. Weaknesses: I don't see any glaring weakness other than Figure 1 not including IGP (the closest model to the proposed one in terms of performance). It would probably be better to move Figure 7 in the appendix to the main manuscript. A minor comment: some of the equations in the appendix are out of bounds. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - It would be nice to add an ablation study about the approximations made in the model. I am curious to know the effect of the number of retained eigenvalues/vectors in the Laplacian approximation on the performance of the model and its relation to homophily (from a graph signal processing point of view, retaining the smallest eigenvalues corresponds to low-pass filtering of the signal over the graph). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See above sections. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very thoughtful and constructive review. We appreciate the recognition of our active learning approach and the Bayesian interpretation of GNN training. We are glad to know that you found our paper clear, well-motivated, and supported by sufficient experiments. We would also like to thank you for your suggestions for improvement and have addressed each of your points below. We hope these responses will address your concerns appropriately. ## 1. Figure 1 and Figure 7. We appreciate your suggestion. **We will shift Figure 7 from the Appendix to the main manuscript and will probably consider substituting Figure 7 for Figure 1.** In this way, we can better show the accuracy and running time comparison simultaneously in one figure and highlight the efficacy and efficiency of our method. ## 2. Out-of-Bound Equations in the Appendix. Thank you for bringing this to our attention. **We apologize for this oversight (lines 894, 910) and will make sure to reformat the equations**. ## 3. Ablation Study. Your suggestion of adding an ablation study analyzing the effect of the number of retained eigenvalues/vectors is insightful. In particular, **we actually did a similar sensitivity analysis on the number of retained eigenvalues $m$ in Appendix G.3,** along with the balancing factor $\lambda$. For a clearer presentation, we summarize the case of fixing $\lambda = 5\times10^{-4}$ with varying numbers of retained eigenvalues $m$ in the following table. More results can be found in Appendix G.3. 
**A larger $m$ usually leads to better performance since the approximation of the node-embedding projection is more precise, but the improvement may be quite marginal if $m$ is too large, at a higher computational cost as well.** | #Retained eigenvalues $m$ | 10 | 20 | 50 | 100 | 200 | |:--------------------------:|:----:|:----:|:----:|:----:|:----:| | Accuracy on Cora (%) | 82.2 | 82.7 | 84.0 | 84.3 | 84.4 | | Accuracy on Citeseer (%) | 71.7 | 73.0 | 74.5 | 75.5 | 75.7 | ## 4. Relation to homophily. We highly appreciate your suggestions. Indeed, **we choose to retain the top-$m$ smallest eigenvalues based on exactly this motivation: it corresponds to a low-pass filter, and the graph signal is assumed to be smooth (i.e., the graph is homophilous).** Therefore, when the homophily principle holds and the graph signals are smooth, **applying a low-pass filter by retaining the smallest eigenvalues in the truncated spectral projection can capture most of the important information present in the graph signal while reducing the computational cost significantly.** This aligns with recent theoretical understandings of GNN models [1][2]. We will discuss its relationship with the homophily concept from graph signal processing (GSP) in more depth in Section 3.2.2, and illustrate more clearly why we choose to retain the smallest eigenvalues during the Laplacian approximation. **For a formal theoretical analysis of the direct relationship between model performance and the number of retained smallest eigenvalues from the GSP aspect, we leave it for future work, since we would need many more assumptions on the graph itself**, like how the graph is generated and how the labels are generated from the graph. Nonetheless, this direction is very promising and can motivate new designs of active learning methods for GNNs. [1] Ma, Yao, et al. "A unified view on graph neural networks as graph signal denoising." 
Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021. [2] Dong, Xiaowen, et al. "Graph signal processing for machine learning: A review and new perspectives." IEEE Signal Processing Magazine 37.6 (2020): 117-127. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response addressing my questions. Having read all reviews and responses, I'm keeping my score as is. --- Reply to Comment 1.1.1: Comment: Thanks again for your strong support for our work! We will improve our paper based on all of your insightful suggestions.
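The low-pass-filtering argument in this thread can be illustrated with a small sketch (illustrative only; the path graph and hand-picked signals are assumptions): a smooth, homophilous signal concentrates its energy on the eigenvectors with the smallest Laplacian eigenvalues, so projecting onto those eigenvectors removes high-frequency noise while preserving the signal.

```python
import numpy as np

n, m = 60, 10

# Path-graph Laplacian: adjacent nodes are connected, so "smooth"
# means slowly varying along the path (homophily).
A = np.zeros((n, n))
i = np.arange(n - 1)
A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

eigvals, V = np.linalg.eigh(L)     # ascending: smallest = lowest frequency

smooth = np.cos(np.linspace(0.0, np.pi, n))   # slowly varying signal
noise = 0.5 * np.cos(np.pi * np.arange(n))    # alternating +/- 0.5 (highest frequency)
noisy = smooth + noise

# Low-pass filter: project onto the m eigenvectors with the smallest
# eigenvalues, as in the truncated spectral projection.
lowpass = V[:, :m] @ (V[:, :m].T @ noisy)

err_raw = np.linalg.norm(noisy - smooth)
err_filtered = np.linalg.norm(lowpass - smooth)
# Filtering removes most of the high-frequency noise.
assert err_filtered < err_raw
```

On a homophilous signal like this, almost all energy lives in the first few eigenvectors, so the truncation loses little while discarding the alternating noise, matching the rebuttal's motivation for retaining the smallest eigenvalues.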
NeurIPS_2023_submissions_huggingface
2023
On the Convergence of Encoder-only Shallow Transformers
Accept (poster)
Summary: This paper proves a convergence rate for GD on one-layer Transformer training. In their setting, the Transformer is composed of a single attention layer, followed by a ReLU unit and a fixed fully connected layer. A linear convergence rate is achieved. The proof technique follows the standard neural-network proof of [Nguyen 2021], where, under some conditions on the minimum singular value of the Gram matrix, they inductively show that the parameter drift is controlled, and hence the GD dynamics are easy to study. The main meat is to show that the minimum singular value of the Gram matrix is lower bounded. [Nguyen 2021] Nguyen, Quynh. "On the proof of global convergence of gradient descent for deep relu networks with linear widths." In International Conference on Machine Learning, pp. 8056-8062. PMLR, 2021. Strengths: To my best knowledge, this is the first convergence proof for Transformers. The rate makes sense and the proof roadmap is clear to me. The first part of the proof idea is standard: under proper conditions on initialization and over-parameterization, the model parameters stay in the linear regime, and hence the convergence is easy to control. The second part is that they show the conditions can be satisfied for some initialization schemes, by showing that the minimum singular value of the Gram matrix is bounded. The most interesting and novel part is the derivation of this lower bound. Weaknesses: 1. As I mentioned above, the first part of the proof (Proposition 1) pretty much follows [Nguyen 2021]. 2. The model studied is a bit of a toy. In practice, the residual block of the Transformer is necessary, but it is missing in this paper. 3. They assume the fully connected layer W_H is fixed to the identity matrix, and argue that since W_V and W_H are adjacent, this is equivalent to W_H being trainable. However, under GD dynamics, I do not think they are equivalent. For example, in a deep linear neural network, training all layers by GD is not equivalent to training only one matrix. 
Or, if they are equivalent, there should be a rigorous proof. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Since the original framework in [Nguyen 2021] works when all layers are jointly trained, is it possible to extend your proof to that regime as well? 2. In Corollary 1, you said that when d_s and d are chosen to be certain values, conditions 2 and 3 cannot be satisfied, and the convergence will fail. I think this is not good reasoning. It could be that your framework cannot prove convergence, but that does not mean the model cannot actually converge. More convincing reasoning would be to show a lower bound on the loss, to establish the failure of convergence. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The technical limitations are stated in the weaknesses part. I do not see any negative social impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful to the reviewer UvAq for appreciating that this is the first theoretical work on the convergence of Transformers and for the insightful feedback. We address the concerns below. --- > **Q1:** [The first part of the proof (Proposition 1) pretty much follows [3].] **A1:** The framework of the proof for Proposition 1 follows a variant of the Polyak-Lojasiewicz (PL) inequality from traditional convergence analysis. But the studied structure is generally more complicated than that of [3] due to the softmax function, and the proofs in Lemmas 7-11 are tailored to the Transformer. More importantly, giving the lower bound for $\alpha_0$ in Proposition 1 is another technical difficulty, which is later addressed in Theorem 1 when considering several particular initializations. See the [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA) for a detailed clarification of this issue. --- > **Q2:** [The model studied is a bit of a toy. In practice, the residual block of the Transformer is necessary, but it is missing in this paper.] **A2:** We are thankful for the suggestion of the reviewer. Following the reviewer's suggestions, we have already extended our result to a shallow Transformer with residual connections; see the [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA). We hope our analysis of the self-attention module (e.g., scaling, softmax) will lay a foundation for the analysis of more general Transformer architectures. --- > **Q3:** [They assume the fully connected layer W_H is fixed to the identity matrix, and argue that since W_V and W_H are adjacent, this is equivalent to W_H being trainable. However, under GD dynamics, I do not think they are equivalent. Since the original framework in [3] works when all layers are jointly trained, is it possible to extend your proof to that regime as well?] **A3:** We agree with the reviewer that these two cases are not equivalent under gradient descent. 
We adopt one learnable parameter for training just for ease of analysis, as mentioned in line 138. The same technique is also used in [1] for analyzing Transformers. Following your suggestions, we will make this clear in our final version by adding the following sentence in line 139: "Note that we mix $W_V$ and $W_H$ together for ease of the analysis, but this does not mean the training dynamics are the same as when we jointly train these two adjacent matrices." Besides, we think it is possible to extend the proof, and we illustrate the high-level idea here. If we consider the case where all layers are trained, then we need to consider one more parameter, $\boldsymbol{W}_H$. Two main parts of the analysis have to be changed. The first part is Proposition 1. Note that in line 628, $\boldsymbol{f}$ becomes: $\boldsymbol{f}= \tau\_1 \boldsymbol{w}\_O^\top \sum_{i=1}^{d\_s}\sigma\_r \left(\boldsymbol{W}\_H \boldsymbol{W}\_V\boldsymbol{X}^\top \boldsymbol{\beta}\_i\right)$. Then Lemmas 7-8, 10, and 11 remain unchanged. In Lemma 9, one more step of the triangle inequality is required to decouple $\boldsymbol{W}\_H^{t'}$ and $\boldsymbol{W}\_H^{t}$. In Lemmas 12, 13, and 14, we need an additional bound on the gradient norm and on the Lipschitz constant for $\boldsymbol{W}\_H$, using the same technique as for the other parameters. Then the proof of Proposition 1 can be finished. The second part that needs to be changed is the minimum singular value of $\boldsymbol{F}\_{pre}$. One can analyze the centered features w.r.t. $\boldsymbol{W}\_H$ following the strategy in [2] and give a lower bound for the minimum singular value of $\boldsymbol{F}\_{pre}$. --- > **Q4:** [Fail of convergence in Corollary 1.] **A4:** Our current proof provides an upper bound on the loss function value. To show "failure of convergence", we agree with the reviewer that a lower bound on the loss function is needed. 
This is quite a difficult problem, and accordingly, based on the reviewer's suggestions, we modify Corollary 1 for a rigorous illustration as follows: --- **Corollary** (Convergence for vector input) Considering LeCun initialization with $\tau_0 = d_m^{-1}$ scaling, given a vector input $\boldsymbol{x}\in \mathbb{R}^{\tilde{d}}$, if one feeds the input to the Transformer by setting $d_s = 1, d=\tilde{d}$, then training with GD can converge to a global minimum. However, if one sets $d_s =\tilde{d}, d=1$, the conditions in Eq. 2-3 do not hold, and **its convergence cannot be guaranteed by our theory**. --- Though the estimation (a lower bound on the loss function) for the Transformer is difficult, we find that it is possible to consider a simplified case, i.e., linear regression, which still shares a similar spirit with our problem. --- Consider the loss $\ell = 1/2 \|\boldsymbol{y} - \boldsymbol{X} \boldsymbol{w}\|_2^2$, where $\boldsymbol{X} \in \mathbb{R}^{N \times d}$. In this case, the neural tangent kernel (NTK) is $\boldsymbol{X} \boldsymbol{X}^\top$. If the NTK is a rank-one matrix, then its minimum eigenvalue is zero (for $N > 1$); in this case one can take $\boldsymbol{X} \in \mathbb{R}^{N \times 1}$. Therefore, if the augmented matrix $[\boldsymbol{X}, \boldsymbol{y}]$ is not rank-one, which is standard in practice, then GD training cannot converge to zero loss since there is no solution. --- This motivates us to rethink Corollary 1 under the Transformer setting as future work. We will add a remark about this in our revised version. --- We hope that our responses have addressed the concerns of the reviewer. If there are any remaining questions, we are happy to discuss them further. --- ### References [1] Yang, et al. "Transformers from an optimization perspective." NeurIPS 2022. [2] Nguyen, et al. "Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep relu networks." ICML, 2021. [3] Nguyen, et al. 
"On the proof of global convergence of gradient descent for deep relu networks with linear widths." ICML, 2021. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I would like to thank the authors for the clarification. The results in the paper are timely, but given the dependence on previous work, I have decided to maintain my score.
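The rank-one linear-regression argument from the rebuttal (A4) can be made concrete with a minimal numerical sketch (the random data and $N = 20$ are illustrative assumptions): when $\boldsymbol{X}$ is rank-one but the augmented matrix $[\boldsymbol{X}, \boldsymbol{y}]$ is not, the least-squares loss has a strictly positive infimum, so GD cannot reach zero loss.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20

X = rng.standard_normal((N, 1))   # rank-one design: NTK = X X^T has rank 1
y = rng.standard_normal(N)        # generic targets, so [X, y] has rank 2

# The best achievable loss is the squared residual of projecting y
# onto the one-dimensional column space of X.
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)
min_loss = 0.5 * np.linalg.norm(y - X @ w_star) ** 2

# y is not in col(X), so the minimal loss is strictly positive:
# no parameter w drives the loss to zero.
assert min_loss > 0.0
```

The same obstruction is what the rebuttal points to for Corollary 1: a degenerate kernel means the loss can be bounded away from zero, regardless of how GD is run.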
Summary: This paper studies Transformer networks consisting of one layer, with average pooling and a scalar output. The authors consider encoder-type softmax attention (unmasked) and show convergence to a global minimizer of the loss. The results hold in the cases where the temperature/scaling $\tau_0$ inside the softmax is inversely proportional either to the overparameterization of the weight matrices or to its square root. The results also hold for different types of initialization (He/LeCun). Finally, the authors also provide an NTK-based analysis. Strengths: To the best of my knowledge this is the first theoretical work on the convergence of Transformers, and it could potentially lead to further investigation of the convergence properties of Transformers and bring insights into the way they work. 
It is also interesting that the authors study different initialization schemes and provide comparisons between them. Weaknesses: Except for some questions I have (see below), my main concern is the formulation of the Transformer model in this analysis. Specifically, 1. The paper does not study sequence-to-sequence models, as encoder Transformers normally are. Thus, I find the formulation of outputting a scalar inconsistent with practice. 2. The ReLU layers do not have any matrix after the application of the ReLU. There are also no residual connections. 3. The pooling layer, to the best of my knowledge, is used only in ViT models. I also think that it would be beneficial for the paper to include more details about the proof techniques for one of the cases (this could be possible by reducing some of the text in the rest of the sections). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In lines 635, 637, do the authors mean the triangle inequality? 2. Why do the authors study these two specific scaling schemes of $d_m^{-1/2}$ and $d_m^{-1}$? It would be interesting to have an analysis dependent on the scaling parameter $\tau_0$. 3. It is also not very clear what step-sizes or constants $\hat{c}$ could be permitted. To my understanding, the step-size needs to be inversely proportional to the number of data samples? It would be great if the authors could include some examples in which the probability bounds are not vacuous, along with the order of the step-size chosen. 4. It would be great if the authors could also include the order of the constants $C_Q,C_K,C_V,C_O$ (if they cannot be chosen arbitrarily) or point out the line in which they are defined. 5. In line 640, how is it implied that $\sigma_{min}(F_{pre}) \geq \alpha/2$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This work has no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful to the reviewer sZxG for appreciating that this is the first theoretical work on the convergence of Transformers and its significance for practice. --- > **Q1:** [No results on sequence-to-sequence (seq2seq) models. The formulation of outputting a scalar is inconsistent with practice. The pooling layer, to the best of my knowledge, is used only in ViT models. No residual connections.] **A1:** We agree with the reviewer that the current formulation does not consider seq2seq Transformers. Our model formulation as well as theoretical analysis is motivated by ViT models used for image tasks. Besides, following the reviewer's suggestions, our result can be extended to a shallow Transformer with residual connections; see the [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA). We hope our analysis of the self-attention module (e.g., scaling, softmax) will lay a foundation for the analysis of more general Transformer architectures. --- > **Q2:** [Include more details about the proof techniques for one of the cases (this could be possible by reducing some of the text in the rest of the sections).] **A2:** We are thankful to the reviewer for the suggestion; we will add the proof sketch, as presented in the [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA), to the final version. --- > **Q3:** [In lines 635, 637, do the authors mean the triangle inequality? In line 640, how is it implied that $\sigma_{min}(F_{pre}) \geq \alpha/2$?] **A3:** We use Weyl's inequality (a generalization of the triangle inequality) and the relationship between the spectral norm and the Frobenius norm. 
To be specific, [lemma] [Weyl’s inequality [6]] Given two matrices $X, Y \in \mathbb{R}^{p\times q}$ with their respective singular values ordered as $\sigma_1(X)\geq\ldots\geq\sigma_r(X)$ and $\sigma_1(Y)\geq\ldots\geq\sigma_r(Y)$, where $r=\min(p,q)$. Then we have $\max_{i\in[r]} |\sigma_i(X)-\sigma_i(Y)|\leq ||X-Y||_2.$ [end lemma] In lines 635 and 637, we choose $i = 1$ in Weyl’s inequality and use the inequality between spectral norm and Frobenius norm to obtain $||X||_2 \le ||Y||_2+ ||X-Y||_2 \le ||Y||_2 + ||X-Y||_F$. Thus, this can also be considered as a triangle inequality for the spectral norm. In line 640, we choose $i = r$ in Weyl’s inequality and use the inequality between spectral norm and Frobenius norm to obtain $\sigma\_{\min}(X) \ge \sigma\_{\min}(Y) - ||X-Y||_2 \ge \sigma\_{\min}(Y) - ||X-Y||_F$. --- > **Q4:** [Why do the authors study these two specific scaling schemes of $d_m^{-1/2}$ and $d_m^{-1}$? It would be interesting to have an analysis dependent on the scaling parameter $\tau_0$.] **A4:** These two scaling schemes are widely used in practice ($d_m^{-1/2}$ in [4,5]) and in theoretical analyses of the infinite-width limit of neural networks ($d_m^{-1}$ in [1,2,3,7]). Our proof framework can facilitate the analysis of general $\tau_0$, e.g., $\tau_0 = d_m^{-a}$ with $a \in [0,1]$. --- > **Q5:** [It is not very clear what step-sizes or constants could be permitted. To my understanding the step-size needs to be inversely proportional to the number of data samples? It would be great if the authors could include some examples, in which the probability bounds are not vacuous and the order of the step-size chosen.] **A5:** The step-size is inversely proportional to $N^{1/2}$, where $N$ is the number of data samples. To see why, in line 652, we define $C = c_1 c_2 + 2 c_3 \ell(\theta^0)$. According to line 648, we can see that $c_1$ is proportional to $\rho$, which is defined as $\rho = N^{1/2} d_s^{3/2} \tau_1 C_x$ in line 170. 
Therefore, $C$ is proportional to $N^{1/2}$. Since Proposition 1 requires the step-size $\gamma < 1/C$, the step-size is inversely proportional to $N^{1/2}$. We will add a remark for this in our final version. The probability is non-vacuous since in Lemma 6 we select $\zeta = d_m^{-1/2}$ so that the probability is at least $1-2\exp(-d_m/2)$. In practice, even for $d_m = 16$, the probability is at least 0.99. Larger width leads to a higher probability. --- > **Q6:** [It would be great if the authors could also include the order of the constants $C_Q, C_K, C_V, C_O$ (if they cannot be chosen arbitrarily) or if they could point out the line in which they are defined.] **A6:** $C_Q = C_K = C_V = C_O$ are of constant order, as can be seen in line 673 (appendix). We directly set them to one for simplicity. We will make it clear in our final version. --- We hope that our responses have addressed the concerns of the reviewer. If there are any remaining questions, we are happy to discuss them further. --- ### References [1] Du, et al. "Gradient Descent Provably Optimizes Over-parameterized Neural Networks." ICLR, 2019. [2] Jacot, et al. "Neural tangent kernel: Convergence and generalization in neural networks." NeurIPS, 2018. [3] Hron, et al. "Infinite attention: NNGP and NTK for deep attention networks." ICML, 2020. [4] Vaswani, et al. "Attention is all you need." NeurIPS, 2017. [5] Dosovitskiy, et al. "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale." ICLR, 2021. [6] Stewart, Gilbert W. Perturbation theory for the singular value decomposition. 1998. [7] Yang, et al. "Feature learning in infinite-width neural networks." ICML, 2021. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I would like to thank the authors for their response. After taking the time to reread the paper, I think it is important that it be rewritten to be friendlier to the reader. Specifically, in my opinion: 1. 
Previous results should be at least cited if not restated so that the reader could easily find them. 2. The appendix should be self-contained. For example, terms like $f_{pre}$ and $F_{pre}$ should be restated and not used from the main text. 3. The lemmas in the appendix are entangled with each other, creating a sense of confusion. The statements should be clearer and more concise. These are some comments that I hope will help the authors improve the writing of the paper. My main concern remains the formulation of the transformer architecture as well as the technical contribution of the paper raised by reviewer UvAq. However, I am raising my score to 6 given that the authors will create a revised version. --- Reply to Comment 1.1.1: Title: Response to reviewer sZxG Comment: We are thankful to the reviewer sZxG for the prompt response and positive support. According to your suggestions, we will reorganize the related work and the appendix for better readability. To be specific: - [Related work] The subsection ‘over-parameterization for convergence analysis’ in the related work will be expanded with more details describing Table 3 of the appendix. - [Self-contained appendix] We will include more detailed notation in Table 2. We will also restate the meaning of each symbol if it was only mentioned in the main body. - [Entanglement of the lemmas in the appendix] We will add more details about the theorems and lemmas, and a proof flowchart to point out the relationships among the lemmas. Besides, the technical difficulty and proof sketch will be added to the main text for better understanding, making our contributions clearer. We appreciate the feedback from the reviewer and we are open to more suggestions to improve the readability and contribution of our work. Best regards, Authors
Summary: This paper presents global convergence results for shallow transformers (one self-attention layer followed by an MLP layer). The authors consider two different scalings for the factor $\tau_0$ used in the attention matrix as a function of $d_m$, the number of rows of $W_Q$ and $W_K$. Specifically, they prove that under LeCun/He/NTK initializations for the parameters of the Transformer, scaling $\tau_0$ as $d_m^{-1/2}$ requires quadratic over-parametrization ($d_m = \Omega(N^2)$) to achieve convergence, while $d_m^{-1}$ requires only linear over-parametrization ($d_m = \Omega(N)$), though the latter has a worse convergence rate. The paper attributes this to the inability of the $\tau_0 = d_m^{-1}$ scaling to capture pairwise interactions. Numerical experiments are presented on synthetic data to corroborate their theoretical findings for the $d_m^{-1/2}$ scaling, as well as on MNIST, where the impact of choosing $\tau_0 = d_m^{-1/2}$ or $\tau_0 = d_m^{-1}$ on the training dynamics is discussed. Strengths: 1. This paper tackles the significant question of global convergence of Transformers. 2. The model considered is a realistic one, where both the attention module (involving pair-wise interactions between the $X^{i}$) and the MLP (applied independently to each Self-attention$(X)^{i}$) are considered (Eqs. (1.1) and (1.2)). 3. The assumptions on the data distribution are clearly discussed and Assumption 3 is verified in practice. Likewise, the assumptions on the parameters of the Transformer are clearly discussed. 4. Theorems 1 and 2 are strong results proving the global convergence of gradient descent for the shallow Transformer model on the general regression task studied in the paper. 5. The paper is focused on the choice of the scaling factor inside the attention module, which is a forward step towards understanding its impact. 
The theoretical findings as well as the experiments of the paper underline the advantage of using the $d_m^{-1/2}$ scaling. 6. Numerical experiments are conducted to support the theoretical claims. Weaknesses: 1. Presentation: a. In my opinion the biggest weakness of the paper is its lack of clarity. The paper is not really well written, hard to read, with typos and missing words (e.g., ll. 37-42, 129, 152, 222). The paper would benefit from being rewritten in a clearer and more pedagogical manner. 2. Theoretical results: a. I read in Table 3 that SoftMax + ReLU leads to linear over-parametrization. From the paper, we understand that this requires the scaling $d_m^{-1}$. However, as the authors remark, this implies that the SoftMax operator will behave as a pooling layer. The link between the linear over-parametrization and the attention module behaving as a pooling layer is not clear. b. Lines 68 to 71. It is not clear whether the mentioned ReLU acts on each element of the sequence separately or mixes them together. 3. Experiments: a. The experiment on the synthetic data in the main paper only focuses on the $d_m^{-1/2}$ scaling. I see that the authors provide the results for the $d_m^{-1}$ scaling in the appendix. A discussion on the impact of the chosen scaling on the performance of the model on the synthetic data experiment is missing in the main paper. b. The paper focuses on regression while the experiment on MNIST seems to be a classification task. What plays the role of $y_n$ in this experiment? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The quartic dependency of $N$ in the dimension $d$ is not discussed in Theorem 2. Is this a realistic assumption? 2. Could the authors provide insights on why the $d_m^{-1}$ scaling, which leads the Attention module to behave as a pooling layer, requires only linear over-parametrization? 3. 
What happens if the Attention module is explicitly replaced by a pooling layer, that is $A_1 = (1/d_s) \times 1_{d_s \times d_s} X W_V^T$? 4. Could the authors comment on the results of the synthetic data experiment for the $d_m^{-1}$ scaling? And compare with the $d_m^{-1/2}$ scaling? 5. Could the authors provide additional details for the MNIST experiment? Is it a classification task? What is the test accuracy of the model? What is the training loss when setting the attention module to the identity? 6. Is there a setup where we should choose the $d_m^{-1}$ scaling? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes (Appendix F). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer p3n5 for the insightful feedback and for appreciating the significance of the paper. We address the concerns below. --- > **Q1:** [The biggest weakness of the paper is its lack of clarity.] **A1:** We appreciate the reviewer pointing out some typos. According to your suggestions, we have already corrected the typos and rephrased unclear expressions in the final version for better readability. For example, line 129 is changed as follows: "We consider the encoder of Transformer, which can be applied to regression or classification ~~task~~ **tasks**." Apart from this, we may kindly remind the reviewer that - Reviewer YNcT agrees that ‘The paper is organized, well-written, and easy to follow in general.’ - Reviewers n1Tb and 9ERM give a score of 4 (excellent) for the presentation. We hope the revised version presents well from the reviewer's side, and we would appreciate it if the reviewer could reconsider this “**biggest weakness**” in the evaluation. --- > **Q2:** [The link between linear over-parametrization and attention module behaving as a pooling layer is not clear.] **A2:** Due to the $d_m^{-1}$ scaling, the attention module behaves similarly to a pooling layer according to the law of large numbers, as mentioned in the introduction. In this case, the nonlinearity on $X$ disappears and thus the minimum eigenvalue of ${\Phi} {\Phi}^\top$ can be estimated via $XX^{T}$, see lines 698 and 704 for details. Accordingly, this leads to a minimum eigenvalue of order $\Theta(N/d)$, and thus linear over-parameterization is enough, see Lemma 17 for details. We will make it clear in our final version. --- > **Q3:** [Not clear: ReLU acts on each element of the sequence separately or mixes them together (Lines 68 to 71.)] **A3:** Throughout the paper, ReLU is applied to each element of the sequence. In line 68, the self-attention layer is changed to one ReLU layer, i.e., in Eq. 
(1.1) we have ${A}_{1} = \max(0, {X} {W}_1^{T})$ applied element-wise. --- > **Q4:** [The quartic dependency of $N$ in the dimension $d$ is not discussed in Theorem 2. Is this a realistic assumption?] **A4:** It can be achieved in practice in some cases. For example, the embedding of the tokens is usually chosen from 8-D to 1024-D [3]. Therefore, it is realistic when the token embedding is small. More importantly, for $d_m^{-1/2}$ scaling, our analysis only requires $N>d$, which is more general and valid for practical datasets. --- > **Q5:** [What happens if the attention module is replaced by a pooling layer?] **A5:** We answer this question both empirically and theoretically: - In practice: According to the reviewer’s suggestion, we also conduct the classification task (using cross-entropy loss) on MNIST with the shallow Transformer, as well as the setting where the attention module is the identity. As shown in Fig. 2 (training loss) and Fig. 3 (test accuracy) in the one-page pdf, training loss increases, and test accuracy drops. - In theory: On one hand, the network after replacing the self-attention can be considered as a 2-layer fully-connected ReLU network that still requires linear over-parametrization under LeCun initialization. On the other hand, we should note that the considered architecture includes a softmax layer and a 2-layer fully-connected ReLU network; if the self-attention mechanism is substituted by a fully-connected ReLU layer, it will require cubic over-parameterization for global convergence, as discussed in the paper. --- > **Q6:** [More experimental details: regression and classification, synthetic data and MNIST, different scalings.] **A6:** In our work, $y_n$ is the classification label of MNIST, but regression on MNIST is still doable via the MSE loss. Taking two classes of MNIST for binary classification as an example, the output of our regression is taken as $sgn(f)$ and the training error is given by $\sum_{n=1}^N [sgn(f(x_n)) - y_n]^2$. 
This is commonly used in theory via the MSE loss, e.g., [4,5]. - Regression results: there is no significant difference between these two scaling schemes on this synthetic dataset. The task is simple, so the optimization process is easy. But for a complex regression task, e.g., regression on MNIST, as shown in Fig. 4 in the one-page pdf, the convergence speed of $d_m^{-1/2}$ is faster than that of $d_m^{-1}$. - Classification results: According to the reviewer’s suggestion, we also conduct the classification task (using cross-entropy loss) on MNIST with the shallow Transformer. As shown in Fig. 2 (training loss) and Fig. 3 (test accuracy) in the one-page pdf, the $d_m^{-1/2}$ setting achieves faster convergence in training loss and higher test accuracy. These separation results on both classification and regression tasks provide good justification for the $d_m^{-1/2}$ setting. We will add these details about the experiments in our revised version. The source code has already been sent to the AC. --- > **Q7:** [Is there a setup where we should choose $d_m^{-1}$?] **A7:** Yes. From the theoretical side, the analysis of the $d_m^{-1}$ scaling in Transformers arises from the NTK literature [1,2]. As mentioned in A6, in practice the $d_m^{-1}$ scaling is a feasible choice for small $d_m$ (or for simple tasks, as mentioned in A6). We hope that our responses have clarified the questions of the reviewer. If there are any remaining questions, we are happy to discuss them further. --- ### References [1] Yang, et al. "Feature learning in infinite-width neural networks." ICML, 2021. [2] Jacot, et al. "Neural tangent kernel: Convergence and generalization in neural networks." NeurIPS, 2018. [3] see tensorflow website: the guide of **word_embeddings** [4] Rahaman, et al. "On the spectral bias of neural networks." ICML, 2019. [5] Liao, et al. 
"A random matrix analysis of random fourier features: beyond the gaussian kernel, a precise phase transition, and the corresponding double descent." NeurIPS, 2020. --- Rebuttal Comment 1.1: Comment: Thanks a lot for your rebuttal and clarifications. After re-reading the paper in detail, I still believe that there is a lack of clarity in both the formulation and the way the results are presented. These are not just minor typos. **Formulation** Some sentences are not grammatically correct: e.g.: l. 228: it is common to make either the assumption that the covariance of x is an identity matrix or weak assumption, e.g., positive definite. l. 245: Besides, the stability of NTK during training, which allows us to build the connection to kernel regression predictor. There are other examples. It makes things harder to understand for the reader. **Presentation of the results** I still believe the paper lacks pedagogy in presenting the results. For instance: 1. Proposition 1 has many notations and deserves some comments for the reader. 2. In 4.4 you propose an NTK analysis. But in 4.2 and 4.3 you also consider the NTK initialization under the two scalings for $\tau_0$, which is confusing. 3. You don't specify which scaling you use in Th.3 and 4. 4. The learning algorithm is never properly introduced. Therefore, my opinion is that this paper deserves to be rewritten, with the formulation revised everywhere and the results presented in a clearer way. I strongly believe this would be beneficial for the paper, which is a good theoretical contribution. --- Reply to Comment 1.1.1: Title: improve the presentation according to Reviewer p3n5's writing suggestions Comment: We thank the reviewer p3n5 for the constructive suggestions on writing. We have already corrected the grammatical errors and imprecise expressions to improve readability. Indicatively, we list revisions responding to the reviewer’s comments below: --- - l. 
228: it is common to make either the assumption that the covariance of x is an identity matrix or weak assumption, e.g., positive definite. -> Frequently, the covariance of x is assumed to be an identity matrix or positive definite. - l. 245: Besides, the stability of NTK during training, which allows us to build the connection to kernel regression predictor. -> Besides, the stability of NTK during training allows us to build a connection on training dynamics between the Transformer (assuming a squared loss) and the kernel regression predictor. --- Regarding the presentation, according to your suggestions, we will introduce more descriptions when presenting our theoretical results. To be specific: --- **Description of Proposition 1:** We will modify lines 170-171 for clarity as follows: Define the following quantities at initialization for simplification: - The norm of the parameters: $\bar{\lambda}_Q = XXX$, $\bar{\lambda}_K= XXX$, $\bar{\lambda}_V= XXX$, $\bar{\lambda}_O= XXX$ - Two auxiliary terms: $\rho = XXX$ and $z = XXX$. Under Asm. 1, we assume that the minimum singular value of ${\bf F}\_{\text{pre}}^0$, i.e., $\alpha \triangleq \sigma\_{min}({\bf F}\_{\text{pre}}^0)$, satisfies the following condition at initialization --- These variables will be included in Table 2 for better reference. Besides, more remarks will be added for better understanding, e.g., how $\alpha$ is affected by different initialization schemes and the selection of the step-size. --- **Confusion on “NTK initialization/scaling” name:** The title of Sec. 4.4 will be modified as follows: 'NTK Analysis of Transformers' -> 'NTK Analysis with $\tau_0 = d_m^{-1}$' for better understanding. --- **Scaling used in Thm. 3 and 4:** In these two theorems (as well as Section 4.1), the $\tau_0 = d_m^{-1}$ setting is employed. We will make it clear. 
--- **No description of learning algorithms:** We will include the following standard gradient descent algorithm for Transformer training in the revised version. It involves gradient updates for the Transformer parameters ${\bf W}_Q$, ${\bf W}_K$, ${\bf W}_V$, ${\bf W}_O$ at the $t$-th step (denote ${\bf \theta}^0 := \\{ {\bf W}\_Q^0, {\bf W}\_K^0, {\bf W}\_V^0, {\bf W}\_O^0 \\}$ for simplicity): - ${\bf W}\_Q^{t+1} = {\bf W}\_Q^{t}- \gamma\cdot \nabla\_{{\bf W}\_Q} \ell ({\bf \theta}^t)$ - ${\bf W}\_K^{t+1} = {\bf W}\_K^{t}- \gamma\cdot \nabla\_{{\bf W}\_K} \ell ({\bf \theta}^t)$ - ${\bf W}\_V^{t+1} = {\bf W}\_V^{t}- \gamma\cdot \nabla\_{{\bf W}\_V} \ell ({\bf \theta}^t)$ - ${\bf W}\_O^{t+1} = {\bf W}\_O^{t}- \gamma\cdot \nabla\_{{\bf W}\_O} \ell ({\bf \theta}^t)$ --- Apart from the aforementioned revisions, we will add the proof sketch and more details on theoretical results and experimental settings, as mentioned in the previous response and the [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA). We will try our best to improve the readability of this paper and hope our clarifications address your concerns. --- We sincerely appreciate the reviewer’s proofreading, which will definitely be beneficial to our paper.
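The four gradient-descent updates listed above can be sketched numerically. The following is our own illustrative code, not the authors' implementation: a toy shallow softmax-attention model with a ReLU readout, trained by full-batch gradient descent with gradients taken by central finite differences; all dimensions, initializations, and the step-size are our assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def f(X, p, tau0=1.0, tau1=1.0):
    # self-attention followed by a ReLU readout, summed over the sequence
    A = softmax(tau0 * (X @ p["WQ"].T) @ (X @ p["WK"].T).T) @ (X @ p["WV"].T)
    return tau1 * np.maximum(A, 0.0).sum(axis=0) @ p["wO"]

def loss(Xs, ys, p):
    return 0.5 * sum((f(X, p) - y) ** 2 for X, y in zip(Xs, ys)) / len(ys)

def grad(Xs, ys, p, eps=1e-5):
    # central finite differences, entry by entry (fine for toy sizes)
    g = {}
    for k, W in p.items():
        G = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            W[idx] += eps
            lp = loss(Xs, ys, p)
            W[idx] -= 2 * eps
            lm = loss(Xs, ys, p)
            W[idx] += eps
            G[idx] = (lp - lm) / (2 * eps)
        g[k] = G
    return g

rng = np.random.default_rng(0)
d, dm, ds, N = 3, 4, 5, 8                      # toy sizes, chosen arbitrarily
Xs = [rng.standard_normal((ds, d)) for _ in range(N)]
ys = rng.standard_normal(N)
p = {"WQ": rng.standard_normal((dm, d)), "WK": rng.standard_normal((dm, d)),
     "WV": rng.standard_normal((dm, d)), "wO": rng.standard_normal(dm)}

gamma = 0.003                                  # small fixed step-size
l0 = loss(Xs, ys, p)
for _ in range(300):
    g = grad(Xs, ys, p)
    for k in p:                                # the four updates from the rebuttal
        p[k] -= gamma * g[k]
l_final = loss(Xs, ys, p)
print(l0, l_final)
```

With a small enough step-size the training loss decreases over the run, mirroring the descent behavior the rebuttal's analysis formalizes.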
Summary: The transformer network is a popular but complicated architecture, which is widely used in NLP and computer vision. The contribution of this paper is two-fold: 1. it aims to give convergence guarantees for a shallow transformer network (encoder part) under different weight initialisations. Also, the paper concludes that the convergence rate of the NTK initialisation is slower than that of the original (LeCun/He) initialisation, which is often used in realistic settings; 2. it shows that a quadratic over-parametrisation guarantees global convergence of the shallow transformer in the usual setting. Assumptions are verified by realistic examples. Convergence and training dynamics are also tracked in both synthetic and real-world data sets. Strengths: This paper gives a standard approach to prove global convergence by bounding the smallest singular value of the weight matrix in the last linear layer of the transformer network from below. The proof is delivered in a clear way, where notations and motivations are clearly stated. The lines seem to be flawless as I cannot find any typos or mistakes. This paper clearly has its significance in showing global convergence of the transformer network under the original (LeCun/He) initialisation for the first time. Also, the over-parametrisation guarantee for global convergence is original, as far as I know. It also serves as a framework for later analyses of transformers. Weaknesses: The paper only considers the shallow (encoder part of the) transformer network and training with gradient descent, which is still far away from practical architectures. Also, it would be good if I could see the comparison between theoretical and empirical convergence in a plot. Sadly, the plots only support the theorems qualitatively but not quantitatively. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The quantity $\alpha$ in Proposition 1 seems to be sub-optimal. 
Is it the reason why you do not plot the theoretical decay with the empirical one? Could a log-scale plot be convincing? Also, in the proof of Lemma 1, you say by LLN we have the second equality in line (67). But the entries in $\beta'_j$ are correlated with those in $\beta_i$. How could you replace each vector with $\frac{1}{d_s}\bm{1}_{d_s}$ by taking the limit before the product? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, the authors include a section (Section F in appendix) to address the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer 9ERM for the insightful feedback and for appreciating our theoretical analysis of Transformers and its value for the theoretical community. We address the concerns below. --- > **Q1:** [Only the shallow (encoder part of the) Transformer network is considered.] **A1:** We agree with the reviewer that our results focus on a relatively simple architecture of Transformers. Nevertheless, even for such a simple architecture, convergence analysis from a theoretical standpoint is still unclear. Our results provide global convergence guarantees as a good starting point, which would pave the way for analyzing practical architectures, as discussed in Appendix C.7. Besides, in our revised version, we have already included the residual block in our analysis, to capture the fundamental components of Transformers as much as possible, see [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA). Furthermore, our results on shallow Transformers can still provide some findings and guidelines for practical Transformers, e.g., the impact of initialization schemes, input format, and scaling, see Section 4.5 for details. --- > **Q2:** [The comparison between theoretical and empirical convergence. The quantity $\alpha$ in Proposition 1 seems to be sub-optimal.] **A2:** We agree with the reviewer that the lower bound $\alpha$ could be loose, though the order of $\alpha$ is tight. This is because our derivation involves some constants in the lower bound of $\alpha$. We believe that these constants can be improved under a refined analysis. According to your suggestions, in Fig. 5 of the one-page pdf, we follow the experiment in Sec. 5.1 and plot the comparison between the empirical convergence and the quantity $\left(1-\frac{\gamma \alpha^2}{2}\right)^t \ell (\theta^0)$. We can see that Proposition 1 provides an upper bound for the empirical loss: the order is the same (both appear as straight lines) but the slopes differ. 
A refined analysis (e.g., with smaller constants) on the lower bound of $\alpha$ leads to a better estimation. --- > **Q3:** [In the proof of Lemma 1, you say by LLN we have the second equality in line (67). But the entries in $\beta'_j$ are correlated with those in $\beta_i$. How could you replace each vector with $\frac{1}{d_s}\bf{1}\_{d_s}$ by taking the limit before the product?] **A3:** The limit is taken after calculating the inner product of the Jacobian, as can be seen in the proof of Lemma 1, e.g., line 787. --- We hope that our responses have already addressed the reviewer's concerns. We are happy to discuss any further questions. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed official comments and rebuttal. I am positive towards this paper and am happy to keep my rating as 7.
Rebuttal 1: Rebuttal: **General Response** Dear reviewers, We appreciate your insightful comments. Below, we address three core topics raised by reviewers and then answer each reviewer's questions individually. Concretely, below we make the following responses: - discuss the extension to the residual Transformer, as pointed out by reviewers sZxG and UvAq - highlight the technical difficulty compared to prior analyses, as requested by reviewers n1Tb and UvAq - add the proof sketch for better understanding, following the suggestion of reviewers YNcT and sZxG --- > **Q1:** Extension to residual Transformer (sZxG and UvAq) **A1:** Our proof framework is general and can handle the following residual Transformer. Here we give a proof sketch to show how to achieve this. We consider the residual block in the self-attention layer: $ \bf{A}_1= \text{Self-attention}(\bf{X}) \triangleq \sigma_s \left( \tau_0 (\bf{X}{\bf{W}}_Q^\top) \left(\bf{X} \bf{W}_K ^\top\right)^\top \right) \left( \bf{X} \bf{W}_V^\top \right) + \bf{X}$, which leads to the output $\bf{f}(\bf X)$ in line 628 as $\bf{f}(\bf X)= \tau\_1 \bf{w}\_O^\top \sum\_{i=1}^{d\_s} \sigma\_r \left(\bf{W}\_V\bf{X}^\top \bf{\beta}\_i + {(\bf{X}^{(i:)})}^\top \right)$. To prove the convergence of the above residual Transformer, we need: - To prove the optimization properties of the loss function (e.g., almost smoothness); the proof is nearly the same as the current version apart from the additional term to upper bound $||{\bf{X}^{(i:)}}||_2$ in Lemma 9. - To prove the lower bound of $\alpha_0$, we need to estimate the output difference of the shallow Transformer with/without the residual block in Lemma 15. Using the Lipschitz continuity of the softmax and ReLU, this difference can be well controlled, and thus the lower bound of $\alpha_0$ can still be achieved with an extra constant. In the final version, we will detail this convergence result after Sec. C.7 in the appendix. 
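For concreteness, the residual self-attention block written above can be sketched as follows. This is our own illustrative code (not from the paper), assuming square weight matrices so that the skip connection type-checks; all names and sizes are ours.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, WQ, WK, WV, tau0, residual=False):
    """Self-attention(X) = softmax(tau0 (X WQ^T)(X WK^T)^T)(X WV^T), optionally + X."""
    A = softmax(tau0 * (X @ WQ.T) @ (X @ WK.T).T) @ (X @ WV.T)
    return A + X if residual else A

rng = np.random.default_rng(0)
d, ds = 4, 6                                   # illustrative sizes
X = rng.standard_normal((ds, d))
WQ, WK, WV = (rng.standard_normal((d, d)) for _ in range(3))

base = self_attention(X, WQ, WK, WV, tau0=1.0 / np.sqrt(d))
res = self_attention(X, WQ, WK, WV, tau0=1.0 / np.sqrt(d), residual=True)
print(np.allclose(res, base + X))
```

The skip connection only shifts the block's output by $X$, which is why the rebuttal can bound the with/without-residual difference via the Lipschitz continuity of softmax and ReLU.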
We hope our analysis will lay a foundation for the analysis of more general Transformer architectures. --- > **Q2:** Technical difficulty (n1Tb and UvAq). **A2:** Handling the softmax function and the $d_m^{-1}, d_m^{-1/2}$ scaling schemes beyond NTK initialization in Transformers are precisely **the two challenges**. Previous convergence analyses, e.g., [1-2], cannot be applied to our setting because of the following two issues: - Different from classical activation functions, e.g., ReLU, in softmax each element of the output depends on all inputs. To tackle the interplay between dimensions, we build the connection between $\Phi \Phi^{T}$ and $XX^T$ (see Lemmas 15-17) to disentangle the nonlinear softmax function, where $\Phi$ contains information on the output of the softmax. By doing so, the lower bound of the minimum eigenvalue of $\Phi \Phi^{T}$ can be well controlled by $XX^T$ and the output of the softmax. Accordingly, a lower bound on the minimum singular value of ${F}\_{pre}$ in Proposition 1 can be obtained. - Regarding different initializations and scalings, previous NTK-based analysis is only valid under the $d_m^{-1}$ setting (the softmax degenerates to an all-one vector) but is inapplicable to the realistic $d_m^{-1/2}$ setting, as discussed in the introduction. To tackle this issue, we analyze the input/output of the softmax under LeCun/He initialization (see Lemma 15) and identify the optimization properties of the loss function (see the proof of Proposition 1) for global convergence under the finite-width setting. We will include this discussion in our final version and hope this clarification will address the reviewers’ concerns. --- > **Q3:** Proof sketch (YNcT and sZxG). **A3:** Here we present the proof sketch of our global convergence results under various initialization schemes. 
[**Proof sketch**] The main idea of our convergence analysis is based on a variant of the Polyak-Lojasiewicz (PL) inequality, i.e., (after a simple calculation) $|| \nabla \ell({\theta}) ||\_2^2 \ge 2 \lambda\_{\min}({K}) \ell({\theta}) \ge 2 \lambda\_{\min}({F}\_{pre} {F}_{pre}^{\top}) \ell({\theta})$. Thus, if the minimum singular value of ${F}_{pre}$ is strictly greater than 0, then minimizing the gradient on the LHS will drive the loss to zero. To this end, we take Proposition 1 as an example. It can be split into two parts: - By induction, at every time step, each parameter of the Transformer can be bounded w.h.p.; the minimum singular value of ${F}\_{pre}$ is bounded away from zero by some positive quantity at the initialization point. - We prove the Lipschitzness of the network gradient (Lemma 14), which means the loss function is almost smooth. Combining the above two results, global convergence of the loss function can be achieved. Furthermore, in Theorem 1 and Theorem 2, we consider several particular initializations and scalings and aim to provide a positive lower bound for ${F_{pre}}$, satisfying the conditions in Proposition 1. We will include it in our revised version for better understanding. --- ### References [1] Du, et al. "Gradient Descent Provably Optimizes Over-parameterized Neural Networks." ICLR, 2019. [2] Nguyen, et al. "On the proof of global convergence of gradient descent for deep relu networks with linear widths." ICML, 2021. Pdf: /pdf/ef62d4d7c74fbf909accc038976b6b6e81186b31.pdf
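The PL-type argument in the proof sketch above can be illustrated on a toy problem of our own choosing: for least squares, $\ell(\theta)=\frac12\|A\theta-b\|_2^2$ satisfies $\|\nabla \ell(\theta)\|_2^2 \ge 2\lambda_{\min}(A^\top A)\,\ell(\theta)$ when $b$ lies in the range of $A$, so gradient descent with step-size $\gamma \le 1/\lambda_{\max}(A^\top A)$ contracts the loss geometrically. All sizes and data below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
theta_star = rng.standard_normal(5)
b = A @ theta_star                       # consistent system: the minimum loss is 0

H = A.T @ A
eigs = np.linalg.eigvalsh(H)
mu, L = eigs[0], eigs[-1]                # PL constant and smoothness constant
gamma = 1.0 / L                          # admissible step-size

def loss(theta):
    return 0.5 * np.sum((A @ theta - b) ** 2)

theta = np.zeros(5)
l0 = loss(theta)
bound_holds = True
for t in range(1, 101):
    theta -= gamma * A.T @ (A @ theta - b)   # gradient step
    # PL inequality gives loss(theta_t) <= (1 - gamma * mu)^t * loss(theta_0)
    bound_holds = bound_holds and loss(theta) <= (1 - gamma * mu) ** t * l0 + 1e-12
print(bound_holds, loss(theta))
```

The geometric rate $(1-\gamma\mu)^t$ here plays the role of $(1-\frac{\gamma\alpha^2}{2})^t$ in Proposition 1, with $\alpha^2$ standing in for the PL constant.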
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proves the global convergence of the shallow transformer under a more realistic setting. While I don't often read theoretical machine learning papers, I found some of the analysis interesting. However, the paper does feel a bit incremental compared to previous work. Strengths: I would like to commend the authors for their efforts in conducting a more realistic analysis. I also appreciate their inclusion of analysis, discussion, experiments on artificial data, and experiments on real data, which makes the paper more complete. Weaknesses: The paper does feel a bit incremental at times. It would be helpful to see more discussion of how the analysis could be applied to practical experiments. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In what way does the proposed analysis differ from prior analyses, apart from a more realistic setting? To achieve the more realistic setting, did the authors have to come up with a different analysis technique? It is not clear to me how novel the analysis techniques are. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Most of the analysis is done on shallow Transformers as pointed out by the authors. Moreover, the practical implication of the analysis is not yet clear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer n1Tb for the insightful feedback. We address the concerns below. --- > **Q1:** [Most of the analysis is done on shallow Transformer. The practical implication of the analysis is not yet clear.] **A1:** We agree with the reviewer that this setting is a bit far away from practical Transformers. However, from a theoretical side, even for a shallow Transformer (with a realistic setting), the global convergence analysis is unknown and is still an open question in the deep learning theory community. The contribution of the convergence analysis for Transformer has already been recognized by the other reviewers. In our revised version, we have already included the residual block in our analysis, to capture the fundamental components of Transformer as much as possible, see [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA). Our theoretical results are still able to provide some guidelines for practical settings: - Initialization: our theoretical result suggests using He/LeCun initialization instead of NTK initialization because the former initializations can lead to faster convergence. - Input format: We show how the data formulation has an impact on the performance of the Transformer. Specifically, given a vector input, if we formulate it along the sequence dimension, we cannot guarantee convergence of training in theory, and it does not converge empirically. If we formulate it along the embedding dimension, then the training can converge. - Scaling: Our theory identifies why the self-attention layer degenerates into a pooling layer under the $d_m^{-1}$ setting with an infinite-width setting. For small width $d_m$, there is no significant difference between these two scaling schemes in terms of convergence; but for a large enough $d_m$, the scaling $d_m^{−1/2}$ admits a faster convergence rate than that of $d_m^{-1}$ because in this case the self-attention layer essentially degenerates into a pooling layer.
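The scaling point above can be illustrated with a small numerical sketch (hypothetical Gaussian scores standing in for the query-key products, not the paper's actual attention layer): raw attention scores grow like $\sqrt{d_m}$, so the $d_m^{-1}$ scaling sends the softmax input to zero as width grows and the attention weights flatten toward the uniform (average-pooling) vector, while $d_m^{-1/2}$ keeps the attention pattern at constant scale.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n = 8                                   # sequence length
z = rng.normal(size=n)                  # fixed direction of the raw scores
dev = {}
for d_m in (16, 1024, 65536):
    scores = z * np.sqrt(d_m)           # raw scores scale like sqrt(d_m)
    p = softmax(scores / d_m)           # d_m^{-1} scaling: logits shrink as 1/sqrt(d_m)
    dev[d_m] = np.abs(p - 1.0 / n).max()
    q = softmax(scores / np.sqrt(d_m))  # d_m^{-1/2} scaling: logits stay O(1)
    assert np.allclose(q, softmax(z))   # same attention pattern for every width

# under d_m^{-1}, attention weights flatten toward uniform as width grows
assert dev[16] > dev[1024] > dev[65536]
```

This mirrors the rebuttal's claim that for large enough $d_m$ the $d_m^{-1}$-scaled self-attention layer essentially degenerates into average pooling.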
--- > **Q2:** [In what way does the proposed analysis different from prior analysis, apart from a more realistic setting? To achieve the more realistic setting, do the authors have to come up with a different analysis technique to do that?] **A2:** We will add a paragraph about the technical difference from prior work in the final version, see the [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA) for details. --- We hope that our responses have addressed the concerns of the reviewer. If there are any remaining questions, we are happy to discuss them further.
Summary: The paper presents convergence results for shallow transformer networks for commonly used Gaussian-based initialization schemes in deep learning (LeCun, He) and different scaling regimes under finite overparameterization. In addition, the authors provide limiting neural tangent kernel (NTK) analysis for its minimum eigenvalue and stability during training. They consider the encoder part of the transformer for the analysis, which consists of a single self-attention layer, followed by ReLU, average pooling and a linear layer which maps to a scalar. The analysis is presented for gradient descent (GD) training with square loss applicable to classification/regression, where all parameters apart from the ReLU layer are trainable. The convergence framework for $1/\sqrt{d_m}$ scaling uses three assumptions: bounded data, linearly independent tokens in each input example and dissimilarity of two different data examples. The $1/d_m$ scaling regime additionally uses a positive-definiteness (PDness) assumption on the covariance matrix. The paper also provides experiments on synthetic Gaussian data and on MNIST data for a Vision Transformer (ViT) tracking convergence under scaling regimes, width and NTK evolution. Strengths: **Importance and motivation**: Very interesting work, I enjoyed reading it. The paper certainly takes a step and provides novel results towards understanding the training dynamics of shallow transformers and the overparameterization required for global convergence - an important research direction considering their prevalent usage. The authors do a good job in motivating the need to study different scaling regimes in the introduction section as well. **Organization and writing**: The paper is organized, well-written and easy to follow in general. **Assumption verification**: I like the important remarks and the experimental evidence the authors use to back the assumptions, particularly for assumption 3 where they provide experiments on the IMDB and MNIST datasets.
Convergence results discussion in sec 4.5 on slower convergence of $1/d_m$ scaling and NTK initialization being slower than He/LeCun provide good insights into the theoretical results. Weaknesses: **Missing intuitive explanations**: Standard convex analysis requires PL inequality + smoothness of the loss function to get a linear rate. Per my knowledge, the proof follows a similar(ish) recipe to [1], where they provide linear rates for NNs using the local gradient-Lipschitz property of the loss + a variant of PL, which gives a good analogue to convex analysis (as briefly mentioned already in the related work sec). I think it’d be nice for the reader if the paper contains a longer proof sketch with such intuitive comments/remarks for the proof steps. Some more suggestions on this can be found in the questions section. **Minor bugs and typos**: There are some minor issues in the proofs and some typos in the paper: please refer to questions. **Regarding assumption 3**: In its current form, assumption 3 forms a tail bound on the similarity of the respective empirical covariance matrices. This covariance inner product equals similarity of $\mathbf{X}_i, \mathbf{X}_j$ + some cross terms - how well does assumption 3 reflect dissimilarity as mentioned in the paper? It would have been nice had there been plots for similarity of $\mathbf{X}_i, \mathbf{X}_j$ along with the current ones. **Concerns/questions regarding the results**: I have several queries regarding separability constant $\hat{c}$, lower bound on $N$, etc. Please see the questions section. **Applicability on contextual token data**: It might be interesting to see if/how the results apply to the commonly used contextual token data setting [2, 3] for theoretical analysis of transformers. Eg: Assm. 2 fails if sparsity [3] isn’t small enough, a small discussion might be something to consider. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Some more suggestions on intuitive remarks for proof sketch**: 1. It'd be helpful to mention the condition on the gradient that makes the variant of PL and is used to prove convergence. 2. Breaking down (briefly) how the local Lipschitz property of the loss is proved. **Typos/minor bugs**: 1. $\gamma \le 1/C$ in prop. Suggestion: It’d be helpful to specify that $C$ signifies the local Lipschitz constant for the gradient. 2. Around line 653 in the appendix: Should it be $\theta^{t+1}, \theta^{t}$ instead of $\mathbf{W}^{t+1}, \mathbf{W}^{t}$? Also, in the same proof, the second-to-last step should be $\sigma_{min}^2$ and $||f^t-\mathbf{y}||^2$ - there might be a factor of 2 missing in the result as well. 3. Lemma 15 proof, line 661: I don't think $\mathbf{XX}^T$ (correctly mentioned in Assm. 3 as $\mathbf{X}^T\mathbf{X}$) should be called the covariance matrix as the data is placed along rows. **Results**: 1. For the thm. 1 remark: For larger $\hat{c}$, the exp quantity grows - which seems a bit counterintuitive in terms of separability of data. 2. In the $1/d_m$ regime there’s an additional req. on $N$, but not in the $1/\sqrt{d_m}$ regime - is there any obvious reasoning for this or is it just a proof artifact? 3. Is there any step-size $\gamma$ (or $C$, same thing) difference in $1/d_m$ vs $1/\sqrt{d_m}$? As far as I know, typically in NNs in the latter it is $O(1)$ [5] and $O(d_m)$ in the former [4]. 4. It’d be nice to know the authors' thoughts about the tightness of the width condition, especially in the $1/\sqrt{d_m}$ regime. The Limitations section says the result doesn’t cover cross-entropy loss; can such a PL variant + local Lipschitz technique be applied to it? I’ve mostly seen such analysis with square loss [1, 5]. **References** [1]. Nguyen, Q.N. and Mondelli, M., 2020. Global convergence of deep networks with one wide layer followed by pyramidal topology.
Advances in Neural Information Processing Systems, 33, pp.11961-11972. [2]. Li, H., Wang, M., Liu, S. and Chen, P.Y., 2023. A theoretical understanding of shallow vision transformers: Learning, generalization, and sample complexity. arXiv preprint arXiv:2302.06015. [3]. Oymak, S., Rawat, A.S., Soltanolkotabi, M. and Thrampoulidis, C., 2023. On the Role of Attention in Prompt-tuning. arXiv preprint arXiv:2306.03435. [4]. Mousavi-Hosseini, A., Park, S., Girotti, M., Mitliagkas, I. and Erdogdu, M.A., 2022. Neural networks efficiently learn low-dimensional representations with sgd. arXiv preprint arXiv:2209.14863. [5]. Du, S.S., Zhai, X., Poczos, B. and Singh, A., 2018. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: I couldn’t find the code in the supplementary or any anonymous link to reproduce the results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer YNcT for the insightful feedback and for appreciating our theoretical analysis of Transformer and its value for the theoretical community. We address the concerns below. --- > **Q1:** [Suggestions on intuitive remarks for proof sketch.] **A1:** Thanks for the suggestion; we have already discussed the proof sketch, see the [general response](https://openreview.net/forum?id=8ZveVHfmIE&noteId=hT7qrLWaaA) for details. We will add the proof sketch as well as intuitive remarks for better understanding. --- > **Q2:** [Typos/minor bugs.] **A2:** We are thankful to the reviewer for spotting the typos. We have already corrected them in the revised version. For example, in Proposition 1, it should be $\gamma \le 1/C$. Around line 653, it should be $\bf \theta^{t+1}$, $\bf \theta^{t}$, $\sigma_{min}^2$, $||f^t-\mathbf{y}||^2$, and a factor of 2. In line 661, we will remove the phrase ‘covariance matrix’ to avoid confusion. --- > **Q3:** [Assumption 3.] **A3:** We plot the figure for $\langle X_i, X_j \rangle$ on the same MNIST dataset, as shown in **Fig. 1** in the one-page pdf. We can see a similar exponential decay for the inner product of two data points. --- > **Q4:** [For the thm. 1 remark: For larger $\hat{c}$, the exp quantity grows - which seems a bit counterintuitive in terms of separability of data.] **A4:** As $\hat{c}$ increases, the probability in Theorem 1 decreases. Meanwhile, from Theorem 3, we can see that a larger $\hat{c}$ results in less separable data (line 199 should be corrected). This makes sense, as less separable data leads to a lower probability. Sorry for the confusion caused by the typo in line 199; we have already changed it to "A larger $\hat{c}$ results in less separable data" in the revised version. --- > **Q5:** [Applicability on contextual token data.]
**A5:** We are thankful to the reviewer for pointing out these interesting works on the analysis of contextual token data and we believe it would be of great interest to generalize our result to the classification task with contextual token data. We have already included a discussion in our revised version. When extending our current analysis to contextual token data, we need a refined analysis framework to distinguish label-relevant (or irrelevant) tokens. Besides, Assumption 2 can be slightly relaxed to the case that $\lambda_{min}(X_k X_k^{T}) > 0$ for any $k$ holds with high probability. Nevertheless, a new proof framework for the minimum eigenvalue under contextual data is required if we want to get rid of Assumption 2 and its variants. --- > **Q6:** [In the regime $d_m^{-1}$ there’s an additional req. on $N$, but not in the regime $d_m^{-1/2}$ - is there any obvious reasoning for this or is it just a proof artifact.] **A6:** This is a proof artifact: the condition $N > \Omega(d^4)$ is needed to provide a lower bound for the minimum eigenvalue of a specific matrix in the case of $d_m^{-1}$. To be specific, we need this assumption to ensure $\lambda\_{\min} ( {\boldsymbol{Z^*}}^\top \boldsymbol{Z^*} ) \ge N/d -9N^{2/3}d^{1/3} = \Theta (N/d)$. --- > **Q7:** [Is there any step-size (or $C$, same thing) difference in $d_m^{-1}$ vs $d_m^{-1/2}$?] **A7:** Yes, these two different scaling schemes have different $c_1$, $c_2$, $c_3$, see line 652. This leads to different $C$. We will make it clear in our final version. --- > **Q8:** [It’d be nice to know the authors' thoughts about the tightness of the width condition, especially in the regime $d_m^{-1/2}$.] **A8:** In the regime $d_m^{-1/2}$, the lower bound of the minimum eigenvalue $\lambda_0$ is of constant order, which is tight. Based on this, by studying the relationship between $d_m$ and $\lambda_0$, see Eq. (48) and (49) for details, we can prove that quadratic over-parameterization is required.
This quadratic over-parameterization requirement could be relaxed if a better relationship were available. However, such a relationship is unclear beyond our current proof technique, and we will add a remark on this in the revised version. --- > **Q9:** [Limitations section says the result doesn’t cover cross-entropy loss, can such a PL variant + local Lipschitz technique be applied to it? I’ve mostly seen such analysis with square loss.] **A9:** It appears feasible to extend our current analysis from the squared loss to the cross-entropy (CE) loss. The proof idea is still based on the PL variant + local Lipschitz technique, but an extra data distance assumption is needed to ensure global convergence; refer to Assumption 4.2 in [2] and Assumption 2.1 in [3] for details. Note that under the CE loss, [2] and [3] deal with fully-connected networks with much stronger over-parameterization than that of the squared loss setting; therefore we think the CE loss requires an over-parameterization condition stronger than quadratic for Transformers. We leave this as future work. --- > **Q10:** [Code] **A10:** According to the rebuttal rules, we have sent the code to the AC. --- ### References [1] Nguyen, et al. "Global convergence of deep networks with one wide layer followed by pyramidal topology." NeurIPS, 2020. [2] Zou, et al. "Gradient descent optimizes over-parameterized deep ReLU networks." Machine Learning, 2020. [3] Allen-Zhu, et al. "A convergence theory for deep learning via over-parameterization." ICML, 2019. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed response to my comments, suggestions and the rebuttal. I am happy to keep my score of 7.
null
null
null
null
A Unified Discretization Framework for Differential Equation Approach with Lyapunov Arguments for Convex Optimization
Accept (poster)
Summary: The paper proposes a unified discretization framework for the differential equation (DE) approach to convex optimization. The DE approach relates optimization methods to continuous DEs with Lyapunov functionals, providing intuitive insights and convergence rate estimates. However, there has been a lack of a general and consistent way to transition back to discrete optimization methods. The paper introduces the concept of "weak discrete gradient" (wDG), which consolidates the conditions required for discrete versions of gradients in the DE approach arguments. The authors define abstract optimization methods using wDG and demonstrate that many existing optimization methods and their convergence rates can be derived as special cases of their setting. Strengths: Comprehensive Approach: The paper addresses an important gap in the DE approach to convex optimization by providing a unified discretization framework. It consolidates the conditions required for discrete versions of gradients, simplifying the overall analysis. Clarity: The pedagogical approach of the paper is appreciated. Most of the theory is constructive and all new aspects are clearly presented. Abstract Optimization Methods: The introduction of abstract optimization methods using wDG allows for a simpler view and analysis of existing optimization methods. It provides a systematic framework for deriving convergence rate estimates. Potential for new methods: The framework allows for the development of new optimization methods by combining known DEs with wDG. It opens up opportunities for researchers to explore novel approaches in the optimization field. Weaknesses: Knowing the accelerated gradient flow: The biggest problem of the approach presented in this paper is the requirement that one needs to know the accelerated gradient flow to derive its discretization - an accelerated gradient flow that has itself been derived from an accelerated algorithm.
Hence, the practicability of the approach is limited, as one is required to already know an accelerated method before deriving another one. Case-Specific Discrete Arguments: The authors acknowledge the need for adjustment and optimization in the construction of optimization methods: while the framework consolidates the case-specific discrete arguments in the definition of wDG, there may still be some complexity involved in finding new practical wDGs for different methods. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Can the framework be easily extended to handle constrained optimization (for instance, using indicator functions)? - Theorem 4.2 lists several wDGs for smooth and/or strongly convex functions. Is it possible to easily find other wDGs for functions that do not satisfy those assumptions? (for instance, non-convex smooth functions, or functions with smooth second-order derivatives?) - Can the limitations of the framework, such as the non-optimality of certain convergence rates, be overcome by considering alternative DEs and Lyapunov functionals? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: - Non-Universal Applicability: The paper acknowledges that certain optimization methods, such as the Douglas-Rachford splitting method and methods based on Runge-Kutta numerical methods, may not fit into the current framework. The applicability of wDG to these methods remains an open question. - Practicability: While the paper indeed overcomes some limitations from previous work in this vein, given the previous points in the Weakness section, it seems the approach overcomplicates the design of new accelerated methods in other settings.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and suggestions. We really appreciate it. Below are our responses to your criticisms in Weaknesses and Questions. >Weakness 1. Knowing the accelerated gradient flow: We feel the reviewer misunderstood our problem setting. Let us explain our standpoint again as follows. It is surely true that our framework concerns how we discretize ODEs in the ODE approach, and it cannot be invoked if we do not have any ODEs (and Lyapunov functionals). But this does not mean that we have to have a new (discrete) optimization method first to start a new generalization; instead, one can start by freely exploring new ODEs with rate-revealing Lyapunov functionals (in fact, Moucer--Taylor--Bach (2022, arXiv:2205.12772) derived an ODE and Lyapunov functional without relying on existing optimization methods, and Kim--Yang (2023, ICML 2023) integrated known accelerated gradient flows to derive a new ODE that has the good properties of both). Then if the ODE only uses gradients (i.e., does not involve the Hessian or higher derivatives), it should be an easy task to define an abstract wDG method for it and write down a discrete convergence estimate just by following the proof in the continuous case (as we demonstrated in the present paper for known ODEs/Lyapunov functionals). In this sense, we claim that now people can almost forget about the final discretizations, and purely concentrate on the exploration of new ODEs/Lyapunov functionals. That is exactly the best outcome of our framework, originally hoped for in the seminal work by Su--Boyd--Candes (2014). These explanations were given in L62--69 (although they were a bit hasty due to space limitations). >Weakness 2. Case-Specific Discrete Arguments: We feel there is some confusion again on the reviewer's part.
What we meant by "Although there is still room for minor adjustments" (L72) is described in L303--309; roughly speaking, the choice of some additional variables, and/or optimization of time-stepping. This room for adjustment is not necessarily negative, and in some cases we can utilize it to further optimize the resulting optimization methods. Even if we settle these freedoms with some obvious choices, the framework gives a convergent method and its rate. We agree with the reviewer's view that finding a new, good wDG would be a nontrivial task. But this could be done as in usual research on creating new optimization methods; suppose we are considering some modification of an existing first-order method. Then, in our framework, it suffices to find out what serves as a wDG there, and check whether it satisfies the conditions of being a wDG. If so, then we know the convergence and the rate. It is also a good idea to start with an existing wDG, and consider its modification, keeping the conditions for a wDG satisfied. In these ways, we believe our framework can serve as a good working environment also for considering new discretizations. These were briefly explained in L62--69. >Question 1. Can the framework be easily extended to handle constrained optimization (for instance, using indicator functions)? Yes. At this moment we have at least two directions for achieving this. The first direction is to follow the DE approach for mirror descent methods, which is very briefly mentioned in Rem 1.1. The second direction is to consider including indicator functions, as suggested by the reviewer. Because a linear combination of wDGs is again a wDG, we can consider a new objective function $\tilde{f} = f + \delta_C$ (where $\delta_C$ is the indicator function for the domain $C$) and its wDGs. These topics are outside the scope of this paper (due to the space restriction), but we have already confirmed that they work, and will report on them somewhere in the near future. >Question 2.
Theorem 4.2 lists several wDGs for smooth and/or strongly convex functions. Is it possible to easily find other wDGs for functions that do not satisfy those assumptions? (for instance, non-convex smooth functions, or functions with smooth second-order derivatives?) We cordially ask the reviewer to distinguish between problems in our framework and those in the entire DE approach. For non-convex and/or non-smooth functions, difficulties first lie in the DE approach itself; extensions to these objective functions are hoped for, but are still left to be investigated, as far as the present authors understand. For functions with higher smoothness, investigations should be done both in the DE approach and our framework. As far as we know, no DE approach theory has been established for such objective functions. Even if such a theory is established, the authors are not sure whether it can be easily copied to the discrete setting with our wDG framework, since there we might be required to use Hessians. Whether some difference of wDGs can work for approximating such Hessians or not is an interesting research topic, but we cannot say anything conclusive at this moment. >Question 3. Can the limitations of the framework, such as the non-optimality of certain convergence rates, be overcome by considering alternative DEs and Lyapunov functionals? Thank you for pointing out this important point. Yes, we believe so. For example, in the example we showed in "Limitations", the rate in our paper $O((1-\sqrt{\mu/L})^k)$ (NAG rate) comes from the continuous estimate in Thm. 2.5, the term $e^{-\sqrt{\mu}t}$. On the other hand, the optimal rate is known to be $O((1-\sqrt{\mu/L})^{2k})$ (a method was given in Van Scoy et al. (2017), which was then shown to be optimal in Drori--Taylor (2022); the arguments are more direct, i.e., not in the ODE approach). In the ODE approach, if we find a combination of DE+Lyapunov functional that exhibits the rate $e^{-2\sqrt{\mu}t}$, then wDG shall be able to derive methods achieving the optimal rate.
Several teams are now working to solve this open question. --- Rebuttal Comment 1.1: Title: Reviewer's response Comment: Dear authors, Thank you for your clear response. While I still believe that the results in the paper were achievable because we already know the accelerated algorithm (especially how to define the wDG in the paper), the authors did convince me of the flexibility of their approach. As shown in the paper and the rebuttal, the paper potentially opens new perspectives for novel analyses. I will raise my score to borderline accept - a strong accept would have been possible if there were some results that went beyond what we already knew. Thank you again for your response and clarification! --- Reply to Comment 1.1.1: Comment: Thank you for your kind understanding. We understand your point that a new method better than existing methods would have made our paper more decisive. But at the moment, this is left as the most important open question in the whole ODE approach; we hope that, with our framework, the search for better optimization ODEs will become a more active research topic.
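The discrete-gradient idea behind wDG can be illustrated on a toy problem; the following is an illustrative sketch (our own construction, not taken from the paper). For $f(x) = \tfrac{1}{2} x^\top A x$, the mean-value discrete gradient $\overline{\nabla} f(x, y) = A(x+y)/2$ satisfies the exact discrete chain rule $f(y) - f(x) = \langle \overline{\nabla} f(x,y),\, y - x \rangle$, so the implicit scheme $x_{k+1} = x_k - h\, \overline{\nabla} f(x_k, x_{k+1})$ dissipates $f$ exactly, mirroring the continuous Lyapunov argument $\frac{d}{dt} f = -\|\nabla f\|^2$ for gradient flow.

```python
import numpy as np

# Mean-value discrete gradient for a quadratic f(x) = 0.5 x^T A x:
#   DG(x, y) = A (x + y) / 2,  with  f(y) - f(x) = <DG(x, y), y - x>.
# The implicit step x_{k+1} = x_k - h * DG(x_k, x_{k+1}) then gives the
# exact dissipation identity f(x_{k+1}) - f(x_k) = -(1/h)||x_{k+1} - x_k||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
h = 0.5
I = np.eye(2)

def f(x):
    return 0.5 * x @ A @ x

x = np.array([1.0, -1.0])
for _ in range(20):
    # the implicit step reduces to the linear solve
    # (I + h A / 2) x_new = (I - h A / 2) x
    x_new = np.linalg.solve(I + 0.5 * h * A, (I - 0.5 * h * A) @ x)
    lhs = f(x_new) - f(x)                         # actual decrease of f
    rhs = -np.dot(x_new - x, x_new - x) / h       # predicted by the identity
    assert abs(lhs - rhs) < 1e-12                 # holds exactly, step by step
    x = x_new

assert f(x) < 1e-6   # monotone decrease to the minimum at x = 0
```

The point of the identity is that the convergence proof can be read off the discrete dissipation directly, which is the role the wDG conditions play in the paper's abstract framework.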
Summary: This paper focuses on the design of convex optimization schemes based on a general discretization framework applied to differential equations (DE). The authors heavily rely on Lyapunov-based inequalities to provide convergence rates. The authors build a systematic framework on top of the one proposed by Su, Boyd and Candès, allowing one to perform an automated analysis, thanks to the concept of weak discrete gradient (wDG). This weaker notion of DG allows one to overcome the assumptions needed for discrete versions of gradients in the DE approach. The resulting convergence rates turn out to be competitive, even though they might not be optimal. Strengths: The analysis of a large class of convex optimization algorithms can now be performed in an automated fashion. Many existing optimization methods, together with their rate estimates, can be retrieved as special cases of the authors' method. This offers a simpler perspective. The presentation of the paper is great; the authors make an effort to avoid losing the reader in too many technicalities. Not being an expert in such convex optimization schemes, it was still enjoyable to follow most of the details. The paper is well-written. Weaknesses: The estimated rates are not always optimal as demonstrated by the authors (some sub-cases of Theorem 5.5 for strongly convex functions), which indicates that there are still specific efforts to be pursued for certain algorithms. Technical Quality: 3 good Clarity: 3 good Questions for Authors: l81: The abbreviation PŁ is not defined in the main text. Please indicate what it stands for. (Polyak-Lojasiewicz) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is a clear section dedicated to limitations with an explicit list: - some methods do not fall into the authors' framework, e.g., Douglas–Rachford splitting method - adjustment (for instance of time-stepping) could be improved for wDG - the obtained rates are not always optimal Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and suggestions. We really appreciate it. Below are our responses to your criticisms in Weaknesses and Questions. >Weakness 1. The estimated rates are not always optimal as demonstrated by the authors (some sub-cases of Theorem 5.5 for strongly convex functions), which indicates that there are still specific efforts to be pursued for certain algorithms. Thank you for pointing out this important point. We agree with your view that some rates in the present paper do not achieve known optimal rates, as we explicitly declared in Limitations. However, this comes from the limitation in the currently known "ODE + Lyapunov functional," not from our framework. Let us explain this by taking the example we showed in "Limitations." The rate in our paper $O((1-\sqrt{\mu/L})^k)$ (accelerated gradient method (AGM) rate) comes from the continuous estimate in Thm. 2.5, the term $e^{-\sqrt{\mu}t}$. On the other hand, the optimal rate is known to be $O((1-\sqrt{\mu/L})^{2k})$ (a method was given in Van Scoy et al. (2017), which was then shown to be optimal in Drori--Taylor (2022); the arguments are more direct, i.e., not in the ODE approach). In the ODE approach, if we find a combination of DE+Lyapunov functional that exhibits the rate $e^{-2\sqrt{\mu}t}$, then wDG shall be able to derive methods achieving the optimal rate. To the best of our knowledge, such a combination is not yet known---but recently steady progress has been made. We are aware of an ODE that seems promising, although we have not yet succeeded in finding a Lyapunov functional for it. Sun et al. (2020) derived a combination of ODE+Lyapunov functional that achieves a rate better than AGM, but it is still not optimal. In view of this recent progress, we expect to find the desired ODE+Lyapunov functional in the near future. >Question 1. l81: The abbreviation PŁ is not defined in the main text. Please indicate what it stands for.
(Polyak--Łojasiewicz.) Thank you so much for your careful reading. We originally spelled this out, but somehow dropped it while finalizing the submitted paper. We will include the full spelling appropriately. --- Rebuttal Comment 1.1: Comment: Thank you for your responses, I will retain my score.
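As a side note on the rate correspondence invoked in the rebuttal above: under the usual identification of $k$ discrete steps with continuous time $t_k = k/\sqrt{L}$ (an assumption made here for illustration; the paper's precise step-size choice may differ), the continuous estimates translate as

```latex
e^{-\sqrt{\mu}\,t_k} = \bigl(e^{-\sqrt{\mu/L}}\bigr)^{k} \approx \bigl(1-\sqrt{\mu/L}\bigr)^{k},
\qquad
e^{-2\sqrt{\mu}\,t_k} = \bigl(e^{-\sqrt{\mu/L}}\bigr)^{2k} \approx \bigl(1-\sqrt{\mu/L}\bigr)^{2k},
```

so a continuous rate $e^{-2\sqrt{\mu}t}$ would indeed correspond to the optimal discrete AGM rate, as the rebuttal argues.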
Summary: The paper considers unconstrained convex optimization problems, studied from the angle of their close relation with differential equations. Specifically, the authors propose a framework for translating results from continuous time methods to their discrete time counterparts. They propose Discrete Gradients (a technical tool from Numerical Analysis) as a means for achieving this, and suitably adapt the concept to the convex optimization scenario (via weak Discrete Gradients). As such, the complexity of showing convergence for DE discretizations is mostly transferred to finding the appropriate wDG. The authors then show how this framework can recover known convex optimization results for some important methods in the class. Strengths: The paper follows in a line of work studying discrete gradient methods as optimization schemes. While the central concept of Discrete Gradients is not novel, its weak variant used in this paper and the unifying framework built around it are, to the best of my knowledge. The studied topic is of considerable interest to the optimization community, and the tools introduced in this work are welcome additions. The paper is written in a crisp and clear style (making for a very pleasant read), and the authors critically discuss their results in (mostly) adequate detail. Weaknesses: - Reference [1] seems to have quite some overlap in terms of results and considered classes of functions. In this light, a more detailed comparison is warranted in terms of e.g., assumptions -- especially on stepsize values, type of results, classes of functions, and techniques. - The usage of the "free" iterate z (introduced in Def. 4.1) when devising the discrete accelerated schemes is opaque and would benefit from further discussion. * Minor (typos and the like): - Line 315: of in - Line 315: the Hessian - you use (i), (ii) in Thm 4.2, but on page 4 line 131 you use the same notation for other concepts, which allows for confusion [1] Ehrhardt, M. 
J., Riis, E. S., Ringholm, T., and Schönlieb, C.-B. (2018). A geometric integration approach to smooth optimisation: Foundations of the discrete gradient method. arXiv preprint, arXiv:1805.06444. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Can the authors provide some discussion on why, e.g., the max stepsize for Gradient Descent (line 1 in table 2) is more restrictive compared to the known strict upper bound of 2/L? It is not clear whether this kind of situation comes from some insurmountable limitations of the framework. - Reference [2] is a missing reference for Lyapunov-based approaches. The paper has the same goal as the present work but achieves it via a different route, involving formulas for constructing Lyapunov functions. It would be good to briefly discuss it in the section "Relation to some other systematic/unified frameworks" - Reference [3] might be relevant for motivating the unified approach [2] De Sa, C. M., Kale, S., Lee, J. D., Sekhari, A., & Sridharan, K. (2022). From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent. Advances in Neural Information Processing Systems, 35, 30963-30976. [3] Bansal, N., & Gupta, A. (2017). Potential-function proofs for first-order methods. arXiv preprint arXiv:1712.04581. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors address the limitations in a separate and very clearly written section and go into adequate detail describing them. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and suggestions. We really appreciate it. Below are our responses to your criticisms in Weaknesses and Questions. >Weakness 1. Reference [1] seems to have quite some overlap in terms of results and considered classes of functions. In this light, a more detailed comparison is warranted in terms of e.g., assumptions -- especially on stepsize values, type of results, classes of functions, and techniques. Thank you so much for pointing out this important point. We would like to add the following paragraph in "Relation to some other systematic/unified frameworks": ``` As said in Section 3, the use of discrete gradients has been explored in the field of numerical analysis. Among such works, Ehrhardt et al. (2018) is a pioneering one that comes with several theoretical results. Both that work and the present one target convex, strongly convex, and PŁ functions (the latter in the Appendix of the present paper). The scope of Ehrhardt et al. (2018) was limited in the sense that they considered only discretizations of gradient flows with strict discrete gradients. Our target ODEs and discretizations are not limited in this way, but as the price, our rate is worse for some strict discrete gradient discretizations of the gradient flow. This comes from the difference in proof techniques: they proved convergence rates directly and algebraically, while our analysis is via Lyapunov functionals. They also gave several theoretical results besides the convergence analysis, such as the (unique) existence of solutions and step-size analyses, which are important in actual implementations. Whether these two frameworks could be unified would be an interesting future research topic. ``` >Weakness 2. The usage of the "free" iterate z (introduced in Def. 4.1) when devising the discrete accelerated schemes is opaque and would benefit from further discussion. This is a very good point, and thank you for suggesting this. 
In fact, we can think of several possibilities for z, which are discussed in the proofs of Thm. 5.4 (Appendix E.3) and Thm. 5.5 (E.4). Please note also that this can be regarded as a modification of the three-point descent lemma (see, for example, https://fa.bianp.net/blog/2017/optimization-inequalities-cheatsheet/). We agree that our explanation of how we utilize z is not quite clear. We would like to add the following paragraph after Def. 4.1. ``` Note that (8) can be regarded as a modification of the three-point descent lemma, where the third variable z is utilized to obtain certain estimates. The freedom in the variable z in (8) is also fully utilized in this paper; see Thm. 5.4 and 5.5 and their proofs. ``` >Weakness 3. Minor (typos and the like) Thank you so much for your very careful reading. We will fix them in the final version. >Question 1. Can the authors provide some discussion on why, e.g., the max stepsize for Gradient Descent (line 1 in table 2) is more restrictive compared to the known strict upper bound of 2/L? It is not clear whether this kind of situation comes from some insurmountable limitations of the framework. This is a very good point. We noticed this, and our understanding is as follows. As far as we know, the bound $2/L$ is shown in Nesterov (2004; the textbook, Thm 2.1.14), with the rate $c_1/(c_2 + k c_3)$ ($c_1, c_2, c_3$ are some constants), which is slightly different from the rate in our paper, $c/k$ (although asymptotically they are the same). The rate $c/k$ is obtained in the ODE approach with the designated Lyapunov functional, but then we cannot attain the strict bound $2/L$. In this sense, we understand this is a limitation of the ODE approach, or more specifically, of the known combination of ODE and Lyapunov functional. Whether there exists a better combination such that we can have the rate $c/k$ with the strict bound $2/L$ is an interesting open question. >Question 2. 
Reference [2] is a missing reference for Lyapunov-based approaches. The paper has the same goal as the present work but achieves it via a different route, involving formulas for constructing Lyapunov functionals. It would be good to briefly discuss it in the section "Relation to some other systematic/unified frameworks" Thank you so much for letting us know of this. We would like to add the following paragraph in "Relation to some other systematic/unified frameworks": ``` Another closely related work is De Sa et al. (2022), which proposed a framework to construct Lyapunov functionals for continuous ODEs. This is valuable in view of the fact that Lyapunov functionals can generally be found only in ad hoc ways. However, they considered only the simplest gradient descent (and its stochastic version), while the main focus of the present paper lies in the discretizations. ``` >Question 3. Reference [3] might be relevant for motivating the unified approach Thank you for this information. We will include this reference in the Introduction, where we survey the history of the ODE and Lyapunov approach. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for your response to the questions. I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you so much for your quick and warm response. We sincerely appreciate all of your kind effort.
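As background for the strict discrete gradients mentioned in the comparison paragraph above (the setting of Ehrhardt et al. (2018)), here is a minimal sketch of one classical instance, the Itoh--Abe (coordinate-increment) discrete gradient; the test function and points are hypothetical, chosen only to verify the defining discrete chain-rule identity $f(y)-f(x)=\langle\bar\nabla f(x,y),\, y-x\rangle$:

```python
def itoh_abe_dg(f, x, y):
    """Itoh--Abe (coordinate-increment) discrete gradient of f between x and y.

    Component i is a difference quotient in which the first i coordinates
    have already been advanced from x to y, so summing component_i * (y_i - x_i)
    telescopes exactly to f(y) - f(x).
    """
    g = []
    for i in range(len(x)):
        hi = list(y[:i + 1]) + list(x[i + 1:])  # coords 0..i from y, rest from x
        lo = list(y[:i]) + list(x[i:])          # coords 0..i-1 from y, rest from x
        g.append((f(hi) - f(lo)) / (y[i] - x[i]))
    return g

# Hypothetical strongly convex quadratic and two arbitrary points:
f = lambda v: 0.5 * (v[0] ** 2 + 3.0 * v[1] ** 2) + v[0] * v[1]
x, y = [1.0, -2.0], [0.3, 0.5]
g = itoh_abe_dg(f, x, y)
lhs = sum(gi * (yi - xi) for gi, yi, xi in zip(g, y, x))
print(abs(lhs - (f(y) - f(x))) < 1e-12)  # True: discrete chain rule holds
```

This identity is what makes such schemes preserve the energy-decay structure of the gradient flow exactly, the property the rebuttal's comparison hinges on.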
Summary: This paper introduces a family of oracles called wDG (weak discrete gradient) verifying (8). This family constraint (8) has been created in order to make the proofs work in the discrete setting and has been inspired by observing what happens in the continuous setting. As expected, the authors present results obtained by running classical algorithms with the gradient replaced by any of those oracles. This new framework allows the inclusion of many popular algorithms using different oracles, such as the proximal operator. Strengths: The paper is concise and clear. The problem is motivated and well explained. The new framework is fairly large, and the authors detail examples of classical algorithms that are included in their framework. Weaknesses: - It might be because I did not know this line of work, but based on the title, I thought of a completely different result, on how to efficiently discretize an ODE using gradients at the current iterate. Finally, the proposed framework seems to be more about the nature of the oracle itself, not the algorithm, and some of them (prox, average gradients, ...) are not always accessible. - Thm 2.3: From this thm, it seems that the convergence rate can be arbitrarily good. Authors should discuss this here and mention that discretization error makes it impossible to converge faster than the proven lower bound $1/t^2$. $\underline{\text{Typos:}}$ - l.23: « strong convex » -> strongly - l.164: « in of » - eq 10: I think \beta and \gamma have been permuted - Thm 5.2: « Let $\bar{\nabla} f$ be a wDG of $f$ […] let f be […] » -> « Let f be … let $\bar{\nabla} f$ be …. » Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and suggestions. We really appreciate it. Below are our responses to your criticisms in Weaknesses. >Weakness 1. It might be because I did not know this line of work, but based on the title, I thought of a completely different result, on how to efficiently discretize an ODE using gradients at the current iterate. Finally, the proposed framework seems to be more about the nature of the oracle itself, not the algorithm, and some of them (prox, average gradients, ...) are not always accessible. A. We agree with the comment that, if we regard the weak discrete gradient as an oracle, the main focus of our paper is exactly "the nature of the oracle itself," i.e., we investigate the conditions wDGs should satisfy. We could not catch what the reviewer meant by the second sentence. Below are some possibilities we considered. First, if the reviewer means that some wDGs (prox, average gradients, ...) are not acceptable as oracles for first-order methods because they lead to implicit methods, we would like to point out that this does not immediately mean that these wDGs are impractical. For example, for minimization problems with a regularization term, one can generate variants of the accelerated proximal gradient method (with the convergence rate guaranteed) by considering wDGs for the regularization term. We have experimentally confirmed that some of these methods perform better than the accelerated gradient method for some objective functions. Second, if the concern is that some wDGs seem to fail for some objective functions (in our paper, (v) and (vi) in Thm 4.2 seem to work only for strongly convex functions), we would like to clarify that this does not immediately imply a limitation of our framework; rather, it clarifies that some choices of wDG are only of limited use, which is informative for users of our framework. 
If neither understanding is what the reviewer intended, it would be greatly appreciated if the reviewer could kindly rephrase the criticism; we would then be more than happy to add a further response. >Weakness 2. Thm 2.3: From this thm, it seems that the convergence rate can be arbitrarily good. Authors should discuss this here and mention that discretization error makes it impossible to converge faster than the proven lower bound 1/t^2. A. Thank you for pointing out this very important point. Yes, as the reviewer says, even though an arbitrarily high convergence rate seems possible for the continuous ODE, this does not at all mean that we can derive discrete schemes (actual optimization methods) from the ODE while keeping the same high order. What actually happens is that greedily demanding a higher rate is penalized at the time of discretization by numerical stability. See, for example, Ushiyama, K., Sato, S., and Matsuo, T., Essential convergence rate of ordinary differential equations appearing in optimization, JSIAM Letters, 14 (2022). https://doi.org/10.14495/jsiaml.14.119 We would like to add a new remark after Thm 2.3 as follows. ``` Rem 2.X. From Thm 2.3 it might seem that we can achieve an arbitrarily high-order rate. Although this is surely true in the continuous context, it does not imply that we can construct discrete optimization methods from the ODE with the same rate. In fact, greedily demanding a higher rate is penalized at the time of discretization by numerical stability. See, for example, the discussion in Ushiyama--Sato--Matsuo (2022). ``` Thank you also for pointing out the typos. We would like to thank you again for your very careful reading. --- Rebuttal Comment 1.1: Title: Thank you Comment: I thank the authors for their answers. I only had a few minor concerns and the authors acknowledged they will fix those in the camera-ready version. I keep thinking this paper is nice and I would like to see it at NeurIPS.
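The numerical-stability point made in the rebuttal above (greedy rates being penalized at discretization) can be seen already for plain gradient descent on a one-dimensional quadratic; a toy sketch, with illustrative constants not taken from the paper:

```python
def gd_final_error(L, h, x0=1.0, steps=50):
    """Gradient descent on f(x) = (L/2) x^2, i.e. the update x <- (1 - h*L) x.

    The iteration is stable exactly when |1 - h*L| < 1, i.e. h < 2/L: the
    admissible step size is capped by stability, no matter how fast the
    underlying gradient flow x' = -L x decays in continuous time.
    """
    x = x0
    for _ in range(steps):
        x -= h * L * x
    return abs(x)

L = 4.0
print(gd_final_error(L, 0.4 / L) < 1e-6)  # True: h < 2/L converges
print(gd_final_error(L, 2.2 / L) > 1.0)   # True: h > 2/L diverges
```

The same mechanism, in a more refined form, is what limits how much of a fast continuous rate survives discretization.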
NeurIPS_2023_submissions_huggingface
2023
Mitigating the Effect of Incidental Correlations on Part-based Learning
Accept (poster)
Summary: This paper builds up a part-based learning algorithm for few-shot classification with components preventing incidental correlations during training. ViT is adopted as the architecture, which facilitates part interpretation by recognizing each image patch as a specific part/foreground/background. Experiments on several few-shot learning benchmarks show the effectiveness of the approach, and an additional experiment on ImageNet-9 verifies that the approach can indeed reduce incidental correlations during training. Strengths: + The proposed algorithm achieves interpretable representations to some degree, which is important for few-shot learning. + Image background has been shown harmful for few-shot learning at both training and test time, and this paper tackles the background problem at training time in a novel way. The experiments on ImageNet-9 clearly verify that the method can remove incidental correlations. + The proposed algorithm achieves SOTA performance on several few-shot learning benchmarks. Weaknesses: - Missing important ablation studies. (1) To verify the effectiveness of part-based learning, the authors need to show the performance of a vanilla ViT with the loss $L_{cls}$. (2) No ablation studies on the use of $L_{mix}$ and $L_{Q}$. Importantly, the authors do not show the baseline performance using part-based learning with $L_{cls}$ alone during pretraining (especially on few-shot learning benchmarks). Thus the usefulness of removing incidental correlations has not been verified. - There are confusing parts in the method description. For example, it is not clear if $z_p$ and $z_d$ are iteratively computed for every block of the network or just computed for the feature of the last block. The same confusion goes for distance maps and embedded patches. The architecture plot in Figure 2 does not answer this question (e.g., what's the input to the second block?). Some other confusions are listed in the questions part. 
minor: - A typo in line 188, it should be $n_f \times F^2 \times C$. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Why can Gaussian noise make mixture-of-parts robust to common data distortions? it seems its use is to relax the target $L_{mix}$ to be softer instead. - What's the exact form of $M_f$ and $M_b$? Are they just binary masks? Why $l_2$-norm is used in $L_{mix}$? Since the target is a mask, it seems cosine distance is more appropriate. - Please clarify the confusing parts listed in the weaknesses part. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No limitations are listed in the paper. I suggest the authors conduct experiments on cross-domain datasets like tieredImageNet->MSCOCO (in Meta-Dataset). Such experiments can show clearer advantages of part-based methods and the removal of incidental correlations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. We respond below to each of the concerns/suggestions. > Missing important ablation studies.... We perform an ablation analysis to investigate the impact of $L_{mix}$ and $L_Q$ during the pretraining phase on the MiniImageNet dataset. Additionally, we provide the baseline results achieved solely through $L_{cls}$, along with the pretraining performance of SMKD [1]:

| Model | 1-shot % | 5-shot % | $\|P\|_1$ | $\|PP^T - I\|_1$ |
|--|--|--|--|--|
| SMKD [1] | 60.93 | 80.38 | - | - |
| $L_{cls}$ | 61.24$_{\pm 0.2}$ | 81.12$_{\pm 0.2}$ | 8.41$_{\pm 0.1}$ | 25.82$_{\pm 0.2}$ |
| $L_{cls}+L_{mix}$ | 62.81$_{\pm 0.1}$ | 83.25$_{\pm 0.2}$ | 8.73$_{\pm 0.2}$ | 24.61$_{\pm 0.1}$ |
| $L_{cls}+L_{mix}+L_Q$ | 62.15$_{\pm 0.2}$ | 82.95$_{\pm 0.2}$ | 0.35$_{\pm 0.1}$ | 0.56$_{\pm 0.2}$ |

The above table highlights the relevance of each component during pretraining. > There are confusing parts in the method description.... In the architectural design, we employ distinct components ($\mathbf{P}$) for each encoder block. Consequently, $z_p$ and $z_d$ are calculated iteratively and transmitted to the subsequent encoder block. Regarding the computation of $L_{mix}$, we utilize the distance maps (**D**) from the final encoder block. While it is feasible to calculate $L_{mix}$ iteratively for each block and then aggregate the results, our observations indicate that this approach increases computational expense and degrades performance. As a result, we compute $L_{mix}$ using the final encoder block only. We agree with the reviewer's assessment that the current draft lacks clarity in explaining the method. We intend to rectify this shortcoming in the final version by revising the description accordingly. > Why can Gaussian noise make mixture-of-parts robust to common data distortions?.... 
Through empirical investigation, we find that Gaussian noise is more effective in promoting resilience of the latent codes while simultaneously preserving the interpretability of parts. In the following table, we examine the impact of salt-and-pepper and speckle noise during the pretraining phase:

| Noise Type | 1-shot % | 5-shot % |
|--|--|--|
| Gaussian | 62.15$_{\pm 0.3}$ | 82.95$_{\pm 0.1}$ |
| Salt-and-pepper | 62.21$_{\pm 0.2}$ | 82.75$_{\pm 0.3}$ |
| Speckle | 61.60$_{\pm 0.2}$ | 81.52$_{\pm 0.1}$ |

Although there is no substantial difference in few-shot performance during the pretraining stage, we note that salt-and-pepper and speckle noise fail to uphold the quality of parts. This, in turn, leads to a decline in the interpretability of part representations, so it is advisable to abstain from their use. For qualitative analysis, please refer to Figure 1 in the uploaded rebuttal pdf. > What's the exact form of $M_f$ and $M_b$? Are they just binary masks?... The segmentation masks $M_f$ and $M_b$ are composed of binary values. Our selection of the $L_2$-norm for $L_{mix}$ is informed by empirical observations. Additionally, we conducted experiments with alternative loss functions, namely the L1-norm and cosine distance, during the pretraining phase:

| Mix Loss Type | 1-shot % | 5-shot % |
|--|--|--|
| L2-norm | 62.15$_{\pm 0.2}$ | 82.95$_{\pm 0.2}$ |
| L1-norm | 58.73$_{\pm 0.3}$ | 79.51$_{\pm 0.2}$ |
| Cosine | (nan loss) | (nan loss) |

While the L1-norm results in lower performance, the cosine distance results in unstable training, with the training loss reaching +infinity and eventually nan. We achieve the best performance with the L2-norm. > No limitations are listed in the paper. I suggest the authors conduct experiments on cross-domain datasets like tieredImageNet-MSCOCO (in Meta-Dataset). Such experiments can show clearer advantages of part-based methods and the removal of incidental correlations. 
As suggested by the reviewer, we conduct a study on the cross-domain task. We use the setup from [2] and the MiniImageNet -> CUB evaluation (5-way-5-shot). For this analysis, we refer to the baseline performances reported in Table 9 of [2]. Moreover, we use the official code for ConstNet [3] and SUN (ViT-S) [4] to evaluate the cross-domain task:

| Method | Backbone | MiniImageNet -> CUB |
|--|--|--|
| Neg-Margin* | ResNet-18 | 67.3 |
| ProtoNet* | ResNet-18 | 67.1 |
| RelationNet* | ResNet-18 | 57.7 |
| Baseline* | ResNet-18 | 65.5 |
| Baseline++* | ResNet-18 | 64.3 |
| Pos-Margin* | ResNet-18 | 64.9 |
| MixtFSL [2] | ResNet-18 | 68.7 |
| Sum-min [2] | ResNet-12 | 67.3 |
| ConstNet [3] | ResNet-12 | 68.8 |
| SUN (ViT-S) [4] | ViT-S | 72.1 |
| **DPViT** | **ViT-S** | **77.1** |

\* results are reported from [2], Table 9.

- [1] SMKD: Supervised masked knowledge distillation for few-shot transformers. (CVPR'23)
- [2] Matching Feature Sets for Few-Shot Image Classification. (CVPR'22)
- [3] Attentional constellation nets for few-shot learning. (ICLR'21)
- [4] Self-promoted supervision for few-shot transformer. (ECCV'22)

Moreover, we also include the limitations of our work: - A constraint within our framework is the reliance on a pre-existing foreground extractor. In certain scenarios, such as the classification of tissue lesions for microbiology disease diagnosis, obtaining an existing foreground extractor might not be feasible. - At present, DPViT focuses on learning components that are connected to the data, yet it does not encompass the connections between these components, such as their arrangement and hierarchical combination. Introducing compositional relationships among these components could enhance comprehensibility and facilitate the creation of a part-based model capable of learning relationships among the parts. > A typo in line 188, it should be $n_f \times F^2 \times C$ We appreciate your observation of this typographical error. We will make the necessary corrections. 
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. The additional empirical studies addressed most of my concerns. I raise my score from 5 to 6. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our response to your comments. We are happy that most of your concerns have been addressed.
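To make the two diagnostics reported in the rebuttal's ablation table concrete ($\|P\|_1$ and $\|PP^T - I\|_1$ for the matrix of part vectors $P$), here is a minimal NumPy sketch; the dimensions (K=64 parts, 384-dim embeddings, matching ViT-S) and the QR-based orthonormalization are assumptions for illustration, not the paper's training procedure:

```python
import numpy as np

def part_diagnostics(P):
    """||P||_1 (entrywise sparsity) and ||P P^T - I||_1 (row orthogonality)."""
    sparsity = np.abs(P).sum()
    ortho = np.abs(P @ P.T - np.eye(P.shape[0])).sum()
    return sparsity, ortho

rng = np.random.default_rng(0)
K, C = 64, 384                      # assumed: K parts, C-dim part embeddings
P = rng.standard_normal((K, C))

# Orthonormalizing the rows (here via QR, purely for illustration) drives
# ||P P^T - I||_1 to ~0, which is the behaviour the L_Q regularizer rewards.
Q, _ = np.linalg.qr(P.T)            # Q: (C, K) with orthonormal columns
P_orth = Q.T
_, ortho_before = part_diagnostics(P)
_, ortho_after = part_diagnostics(P_orth)
print(ortho_after < 1e-8 < ortho_before)  # True
```

This mirrors the trend in the table: adding $L_Q$ collapses both $\|P\|_1$ and $\|PP^T - I\|_1$ toward small values, indicating sparse, near-orthogonal parts.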
Summary: The authors propose a strong part-based learning method for few-shot learning. Noticing the incidental correlations between part signals, the method proposed in the paper, named DP-ViT, learns to disentangle foreground and background parts. This is done by introducing a mixture-of-experts formulation of parts. To further encourage high-quality and diverse parts, DP-ViT also employs a regularization for orthogonality. The method proposed by the authors achieves SOTA on multiple few-shot learning datasets, including MiniImagenet, TieredImageNet, and FC100. With the DP-ViT framework, the authors also illustrate the incidental correlation in imagery data and how DP-ViT improves the performance of models. Strengths: The authors have made sound improvements to two regularizations on part-based models. For me, these ideas make sense, and the authors have demonstrated interpretation of these modules in the experimental sections. Additionally, it is nice to see that the proposed solutions improve the performance of few-shot learning. Weaknesses: I don't directly work on this topic, and I think the authors have provided adequate analysis and explanation on the few-shot learning part. However, I think this paper could have a more general impact. Please check the questions below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It looks to me that DP-ViT is a quite general improvement to the ViT architecture. However, to support its full utility, I think an experiment on ImageNet is needed, where DP-ViT can outperform a vanilla ViT or a part-based ViT baseline. 2. I personally would suggest adding an ablation study for the number of parts, K, especially what will happen when K is increased. It is interesting to see that K=64 parts is sufficient for a dataset. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I haven't found such a section from the authors. But I tend to think that this work does not have a negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. We respond below to each of the concerns/suggestions. > It looks to me that DP-ViT is a quite general improvement to the ViT architecture. However, to support its full utility, I think an experiment on ImageNet is needed, where DP-ViT can outperform a vanilla ViT or a part-based ViT baseline. As suggested by the reviewer, we conduct an experiment on ImageNet-1K and compare with the vanilla ViT-S model.

| Model | Val Acc % |
|------|------|
| ViT-S | 54.5 % |
| **DPViT** | **72.1** % |

Please note that we train both models for 100 epochs using approximately 1 million training samples, with a batch size of 512. For DPViT, we use the same set of hyperparameters that we used for the ImageNet-9 dataset (please refer to Appendix Section 1.2). For ViT-S, we use the Adam optimizer with learning rate 0.001 along with a cosine scheduler (the code is adopted from the **timm** library). Subsequently, we evaluate the models on the provided validation set of 50,000 samples. Ideally, DPViT could undergo pretraining on an expansive dataset like ImageNet-21K or JFT300M, followed by fine-tuning on ImageNet-1K. Regrettably, due to limitations in time and resources, we were unable to carry out such extensive pretraining for both models. > I personally would suggest adding an ablation study for the number of parts, K, especially what will happen when K is increased. It is interesting to see that K=64 parts is sufficient for a dataset. 
We have reported an ablation with a varying number of parts K on the MiniImageNet dataset in Appendix Section 1.6 (Table 2):

| | K=32 | K=64 | K=96 | K=128 |
|--|--|--|--|--|
| $n_f$ | 1-s/5-s | 1-s/5-s | 1-s/5-s | 1-s/5-s |
| $n_f=K/2$ | $72.2_{\pm 0.2}$ / $87.8_{\pm 0.4}$ | $72.9_{\pm 0.5}$ / $88.1_{\pm 0.4}$ | $72.1_{\pm 0.2}$ / $88.1_{\pm 0.4}$ | $72.1_{\pm 0.5}$ / $87.1_{\pm 0.5}$ |
| $n_f=2K/3$ | $72.2_{\pm 0.2}$ / $88.1_{\pm 0.4}$ | $73.8_{\pm 0.5}$ / $89.3_{\pm 0.4}$ | $73.1_{\pm 0.2}$ / $88.1_{\pm 0.4}$ | $72.2_{\pm 0.5}$ / $87.4_{\pm 0.5}$ |
| $n_f=4K/3$ | $72.3_{\pm 0.2}$ / $88.4_{\pm 0.4}$ | $73.4_{\pm 0.5}$ / $88.5_{\pm 0.4}$ | $73.2_{\pm 0.2}$ / $87.9_{\pm 0.4}$ | $72.5_{\pm 0.5}$ / $87.8_{\pm 0.5}$ |

We observed that increasing $K$ beyond a certain threshold degrades performance as the computational complexity increases. > Limitations: I haven't found such a section from the authors. But I tend to think that this work does not have a negative societal impact. Limitations of our work: - A constraint within our framework involves relying on a pre-existing foreground extractor. In certain scenarios, such as the classification of tissue lesions for microbiology disease diagnosis, obtaining an existing foreground extractor might not be feasible. - At present, DPViT focuses on learning components that are connected to the data, yet it doesn't encompass the connections between these components, like their arrangement and hierarchical combination. Introducing compositional relationships among these components could enhance comprehensibility and facilitate the creation of a part-based model capable of learning relationships among the parts. --- Rebuttal Comment 1.1: Comment: The authors addressed most of my concerns. I also read the reviews from fellow reviewers, especially the reviewers with "rejection" ratings, and I temporarily keep my ratings. Thanks for pointing out the ablations for the number of parts (K), it makes sense to me. 
A follow-up: I appreciate adding the first experiment of comparing with ViT-S. However, the performance of ViT-S is not quite right (even under the 100 epochs setting). I have run this before myself and the accuracy is around 75%. I suggest using the [Deit](https://github.com/facebookresearch/deit) repository for training. Nonetheless, I understand that the authors have limited resources, and this is not a claimed contribution in the paper. So this won't affect my ratings. Please remember to figure this out for camera-ready if you get accepted in the future. --- Reply to Comment 1.1.1: Comment: We appreciate your reply. It's reassuring to see that many of the issues you raised have been taken care of. Following your recommendation, we intend to focus on improving the ViT-S outcomes in the upcoming version, particularly by utilizing the promising Deit framework. Please let us know if you need any other information from our side.
Summary: The paper addresses the impact of incidental correlations on part learning and proposes several regularization methods to mitigate it. The first regularization separates foreground and background to guide the part-based learning towards relevant input regions. To this end, the paper proposes a mixture-of-parts formulation to collect latent codes into masks and supervise them (weakly) with a pretrained/unsupervised foreground extractor. A second regularization is then employed to impose invariance to background variation. Additionally, sparsity and orthogonality regularizations are used to refine part representations. Strengths: 1. The paper tackles an important problem. Part-based methods are becoming increasingly valuable due to their interpretability. 2. The paper points out a failure mode of part-based learning. 3. The text is nicely written and easy to follow. Figures nicely highlight the most relevant findings. Weaknesses: 1. The choice of sparse and spectral norms is not well explained in my opinion. I understand the need for sparsity in the image space when training part-based models, but in the paper, sparsity is enforced on the matrix of part representations instead. What is the physical meaning of this? Also, why is the spectral norm chosen to optimize for orthogonality? It is not evident straight away why it would be a better alternative to simply the L2 / Frobenius norm. 2. Why is the background model needed as a separate entity? Is modeling just the foreground (the background then being all the rest of the image) not sufficient? When the background is modeled explicitly as a mixture of parts, the part model should learn to encode all background patches as well, right? This looks like an unnecessary complication. 3. The whole framework consists of 3 regularizers, each with its own weighting. This introduces more hyper-parameters to the model and increases the resources needed to optimize performance.
With this, it is not clear if the (rather limited) performance improvement provided by the method is worth the increased demand for hyper-parameter optimization. 4. Most of the methods in experimental evaluation are built on top of different backbones. It makes it harder to disentangle improvement from regularization and improvement from the backbone. It would make sense to use the model trained with L_{cls} only as a baseline for comparison in Table 1 and Table 2. UPD: Authors addressed most of my concerns in the rebuttal. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I suggest authors address weaknesses for the rebuttal. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: I suggest authors elaborate on hyper-parameter optimization for the proposed framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. We respond below to each of the concerns/suggestions. > The choice of sparse and spectral norms is not well explained in my opinion. I understand the need for sparsity in the image space when training part-based models, but in the paper, sparsity is enforced on the matrix of part representations instead. What is the physical meaning of this? ... We have explained the need for the sparse and orthogonal norms in the context of our work in L57-L65: “*Sparsity ensures that only a few parts are responsible for a given image, as images comprise a small subset of parts. Conversely, diversity in part representations prevents the parts from converging into a single representation and facilitates each part’s learning of a unique data representation*”. Sparsity is needed to select a subset of parts from the part matrix for a given sample. Similarly, orthogonality is needed to impose diversity on the foreground/background parts. Moreover, the choice of spectral norm is explained in L200-L213: “*One solution is to enforce orthogonality on the matrix $\mathbf{P}^{m\times n}$ by minimizing $||\mathbf{P}^T\mathbf{P} - \mathbf{I}||$. However, the solution will result in a biased estimate as $m<n$; that is, the number of parts ($K$) is always less than the dimensionality of parts ($F^2\cdot C$). In our experiments, we observed that increasing $K$ beyond a certain threshold degrades the performance as computational complexity increases. (Please refer to our Appendix Section 1.6 (Table 2) for experiments on the different values of $K$.) To minimize the degeneration of parts, we design our quality assurance regularization by minimizing the spectral norm of $\mathbf{P}^T\mathbf{P} - \mathbf{I}$, and by adding an $L_1$ sparse penalty on the part matrix $\mathbf{P}$.
The spectral norm of $\mathbf{P}^T\mathbf{P} - \mathbf{I}$ has been shown to work with over-complete ($m<n$) and under-complete ($m\geq n$) matrices* [1].” We structure our power-iterative spectral norm methodology on the framework presented in [1]. It is important to emphasize that the study conducted in [1] concentrates on ensuring orthogonality across all weights of the neural network, whereas our approach employs the spectral norm to introduce diversity among the matrices associated with the foreground and background components. For a comprehensive treatment of the spectral norm, please refer to [1] "Can we gain more from orthogonality regularizations in training deep networks?" (NeurIPS 2018).

> Why the background model is needed as a separate entity? Is modeling just the foreground (the background is then all the rest of the image naturally) not sufficient? ...

The background parts provide essential contextual information during training, which is crucial for learning interpretable part representations. To verify this, we conduct an ablation study where $n_f = K$ and $n_b = 0$. Furthermore, we switch off the background influence on $L_{mix}$ by modifying Equations 5 and 6 as:

$\mathcal{L}_{mix} = || \mathcal{I}(L_F) - \mathcal{M}_f||_2$

$\mathcal{L}_{Q} (\lambda_s, \lambda_o) = \lambda_s ||\mathbf{P} ||_1 + \lambda_o \Big[\sigma \big(\mathbf{P_F} \cdot \mathbf{P_F}^{T} - \mathbf{I} \big) \Big]$

| Design | 1-shot % | 5-shot % | $\|P\|_1$ | $\|PP^T - I\|_1$ |
| -------- | -------- | -------- | -- | -- |
| Foreground | 72.81$_{\pm{0.15}}$ | 88.21$_{\pm{0.18}}$ | 0.35$_{\pm{0.21}}$ | 0.99$_{\pm{0.11}}$ |
| Foreground+Background | 73.05$_{\pm{0.15}}$ | 88.56$_{\pm{0.18}}$ | 0.32$_{\pm{0.21}}$ | 0.32$_{\pm{0.17}}$ |

The foreground-only model exhibits comparable few-shot performance to the foreground+background model, but the acquired components lack diversity, resulting in reduced interpretability.
This is evident from the higher orthogonal-norm value observed in the case of the foreground-only model. >The whole framework consists of 3 regularizers, each with its weighting. This introduces more hyper-parameters to the model and increases resources needed to optimize the performance. ... Although our framework incorporates multiple hyperparameters, extensive hyperparameter tuning is not required. As outlined in Appendix Section 1.2, we maintain consistency in most hyperparameters across all employed datasets. Specifically, we set $\lambda_{cls}$ = 1, $\lambda_{s}$ = 0.1, $\lambda_o$ = 0.1, $\lambda_{cls}^{inv}$ = 1, and $\lambda_p^{inv}$ = 0.5 for all experiments conducted on MiniImagenet, TieredImageNet, FC100, Imagenet-9, and Imagenet-1K datasets. For a more comprehensive understanding of our approach to hyperparameter optimization, kindly refer to Appendix Section 1.2. >Most of the methods in experimental evaluation are built on top of different backbones. It makes it harder to disentangle improvement from regularization and improvement from the backbone. It would make sense to use the model trained with L_{cls} only as a baseline for comparison in Table 1 and Table 2. For a better evaluation of our work, we conduct experiments with the $L_{cls}$ baseline. 
As suggested by the reviewer, we compare the $L_{cls}$ model with DPViT in Table 1 and Table 2 as follows:

Table 1:

| Model | MiniImageNet (1-shot/5-shot) | TieredImageNet (1-shot/5-shot) | FC100 (1-shot/5-shot) |
| -------- | -------- | -------- | -------- |
| $L_{cls}$ | $72.15_{\pm{0.20}}$/$87.61_{\pm{0.15}}$ | $78.03_{\pm{0.19}}$/$89.08_{\pm{0.19}}$ | $48.92_{\pm{0.13}}$/$67.75_{\pm{0.15}}$ |
| DPViT | $73.81_{\pm{0.17}}$/$89.85_{\pm{0.18}}$ | $79.32_{\pm{0.19}}$/$91.92_{\pm{0.20}}$ | $50.75_{\pm{0.20}}$/$68.80_{\pm{0.15}}$ |

Table 2:

| Method | IN-9L | Org | M-SAME | M-RAND | BG-GAP |
|--------| -------- | -------- | -------- | -------- | -------- |
| $L_{cls}$ | 95.1$_{\pm{0.21}}$ | 97.2$_{\pm{0.25}}$ | 91.5$_{\pm{0.19}}$ | 81.7$_{\pm{0.15}}$ | 9.2$_{\pm 0.2}$ |
| DPViT | 96.9$_{\pm{0.15}}$ | 98.5$_{\pm{0.20}}$ | 93.4$_{\pm{0.19}}$ | 87.5$_{\pm{0.23}}$ | 5.9$_{\pm 0.2}$ |

---

Rebuttal Comment 1.1:

Comment: Authors addressed most of my concerns. Can the authors elaborate more on the sparsity part? “Sparsity ensures that only a few parts are responsible for a given image, as images comprise a small subset of parts." <- This I understand, but in the paper the sparsity is enforced on the whole matrix of part representations, meaning it does not regularize the number of parts that an image is comprised of, but rather regularizes the sparsity of individual part representations, which is not the same.

---

Reply to Comment 1.1.1:

Comment: Thank you for acknowledging our response. We are happy that most of your concerns have been addressed. In our implementation, we employ a modified form of the sparse constraint. We utilize the part matrix $P$ consisting of $K$ parts, structured as $K\times F^2 \cdot C$, where parts are arranged along the rows, and we compress the columns (averaging each row). This operation yields a vector of dimension $K$, where each entry indicates the contribution of the corresponding part vector.
Ultimately, we apply our sparse constraint to this $K$-dimensional vector. We hope this clarifies the point. If there is any more information we can provide, please let us know.
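To make the two regularizers concrete, here is a minimal NumPy sketch of the $L_1$ penalty on the row-averaged part matrix and the power-iteration spectral norm of $\mathbf{P_F}\mathbf{P_F}^T - \mathbf{I}$ described above. The function names, iteration count, and NumPy framing are our own illustration, not the paper's implementation.

```python
import numpy as np

def sparsity_penalty(P):
    """L1 penalty on the row-averaged part matrix.

    P has shape (K, F*F*C); averaging each row yields a K-dimensional
    vector whose entries indicate each part's contribution, and the
    L1 norm of that vector is the sparse penalty."""
    contrib = P.mean(axis=1)  # shape (K,)
    return float(np.abs(contrib).sum())

def spectral_norm(M, n_iter=50):
    """Largest singular value of M, estimated by power iteration."""
    v = np.random.default_rng(0).normal(size=M.shape[1])
    for _ in range(n_iter):
        u = M @ v
        u /= np.linalg.norm(u) + 1e-12
        v = M.T @ u
        v /= np.linalg.norm(v) + 1e-12
    return float(u @ M @ v)

def quality_regularizer(P_F, P_B, lam_s=0.1, lam_o=0.1):
    """Sketch of L_Q (Equation 6): L1 sparsity on the full part matrix
    plus spectral-norm orthogonality applied to the foreground and
    background sub-matrices separately."""
    P = np.concatenate([P_F, P_B], axis=0)
    ortho = (spectral_norm(P_F @ P_F.T - np.eye(P_F.shape[0]))
             + spectral_norm(P_B @ P_B.T - np.eye(P_B.shape[0])))
    return lam_s * sparsity_penalty(P) + lam_o * ortho
```

Note that when the rows of $\mathbf{P_F}$ are orthonormal, $\mathbf{P_F}\mathbf{P_F}^T = \mathbf{I}$ and the orthogonality term vanishes, which is exactly the diversity condition the regularizer pushes towards.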
Summary: The paper proposes a more robust and interpretable part-based learning method for image classification (though probably extensible to other tasks). The main contributions are 1) disentanglement of foreground/background regions via a weakly supervised loss, 2) sparsity and orthogonality constraints to discourage degenerate solutions, and 3) fine-tuning with an invariance constraint to ensure predictions are invariant to the learned background parts. The experiments show competitive results compared to recent methods and provide (arguably) more interpretable results. Strengths: [S1] Clearly motivated and technically sound: The paper addresses a well-known problem of spurious correlations in neural networks. While there are multiple types of spurious correlations, the paper only focuses on spurious correlations stemming from foreground and background regions. While that makes it more limited, it also provides clearer motivation, and the proposed solution can address the specific problem better. Indeed, the proposed solutions appear technically sound and well designed to discourage such correlations, which is also well corroborated by the results. [S2] Adequate details/ablations: There are many parts to the proposed method, but the paper has done a reasonably good job with ablations and discussion of the effects of the different components. Weaknesses: [W1] Some details are slightly unclear: The pretraining portion is mostly quite clear, but I have some clarity issues with the finetuning (distillation) portion of the proposed method. I am not sure I fully grasp how Eqn 8, part 2, ensures the fg code captures details in the absence of background. Part 1 is clearer but still needs a bit more detail (i.e., if the goal is to make predictions invariant to the background, why distill from a teacher model that has access to both?
I assume that the choice is made based on empirical results; e.g., fg only model might not converge well), but this needs to be discussed in more detail. [W2] Effects of error propagation from foreground/background "weak supervision": The existing solution requires access to a model that can segment/divide images into fg and bg during training. While I agree that such models are easy to use and are widely available, the paper is unclear about what the effects of the quality of the underlying model are on the proposed method. The quality of fg/bg themselves might have issues with correlations. This could perhaps be simulated by using corruption techniques to see how the quality of foreground/background segmentation affects proposed methods and also get an idea of how this method might fare with newer/future methods for fg/bg segmentation. L333 studies the availability of masks but not whether a noisy/incorrect mask might affect the model. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Why is the orthogonality constraint applied to the whole matrix rather than foreground/background separately? Is this done for practical reasons (i.e., it would be difficult to implement) or is there a motivation behind this choice? - Please see W1. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper aims to reduce spurious correlations that can help mitigate some biases/lack of generalization in neural networks. I do not see an explicit potential for negative societal impact. The limitation section is missing and could include technical limitations/assumptions behind the work (Does not address intra-fg spurious correlations, requires access to masks, etc..), but is not a deal-breaker. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. We respond below to each of the concerns/suggestions.

> [W1] Some details are slightly unclear: The pretraining portion is mostly quite clear but I have some clarity issues with the finetuning (distillation) portion of the proposed method. I am not sure I fully grasp how Eqn 8 part 2, ensures the fg code captures details in the absence of background. Part 1 is clearer but still needs a bit more details (i.e., if the goal is to make predictions invariant of background, why distill from a teacher model that has access to both? I assume that the choice is made based on empirical results; e.g., fg only model might not converge well), but this needs to be discussed in more detail.

In the process of fine-tuning, the initial component of Equation 8 (referred to as $L_{cls}^{inv}$) ensures that the student model acquires pertinent foreground insights from a teacher model with access to both foreground and background information. This design decision further ensures the incorporation of pertinent background information while learning representations. As recommended, we carried out an empirical investigation of the impact of distilling knowledge from a teacher model that possesses only foreground information:

| Architecture | 1-shot % | 5-shot % |
|--------| -------- | -------- |
| Distill only foreground | 73.69$_{\pm{0.23}}$ | 88.51$_{\pm{0.17}}$ |
| Distill foreground + background | 73.81$_{\pm{0.21}}$ | 89.85$_{\pm{0.20}}$ |

As demonstrated in the table above, the teacher model that distills knowledge from both foreground and background information exhibits slightly superior performance compared to the model that only accesses foreground information. During the finetuning process, we abstain from employing the mixture loss because the learned latent codes are used to generate the foreground masks.
The incorporation of $L_p^{inv}$ essentially confines the foreground elements within the foreground space. Alternatively, another approach would involve utilizing the foreground masks to calculate the mixture loss (akin to the pretraining stage). If this route is taken, the inclusion of the $L_p^{inv}$ loss becomes unnecessary during fine-tuning. Through empirical testing, we have confirmed that both of these design options yield comparable performance.

> [W2] Effects of error propagation from foreground/background "weak supervision": The existing solution requires access to a model that can segment/divide images into fg and bg during training. While I agree that such models are easy to use and are widely available, the paper is unclear about what the effects of the quality of the underlying model are on the proposed method. The quality of fg/bg themselves might have issues with correlations. This could perhaps be simulated by using corruption techniques to see how the quality of foreground/background segmentation affects proposed methods and also get an idea of how this method might fare with newer/future methods for fg/bg segmentation. L333 studies the availability of masks but not whether a noisy/incorrect mask might affect the model.

We agree with the reviewer that the effect of the quality of foreground/background segmentation can be simulated by adding corruptions to the ground-truth masks. We conduct an ablation studying the effect of corrupting the ground-truth masks with Gaussian noise of manual corruption strength ($\lambda_c$): $M_f = \lambda_c \delta + M_f;\ M_b = \lambda_c \delta + M_b;\ \delta \sim N(0,1)$.
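As a small illustration, the corruption above takes only a few lines; the following NumPy sketch (the helper name `corrupt_masks` is ours, not from the paper) applies $M \leftarrow \lambda_c \delta + M$ with $\delta \sim N(0,1)$ to both masks:

```python
import numpy as np

def corrupt_masks(M_f, M_b, lam_c, seed=None):
    """Additive Gaussian corruption of the ground-truth masks:
    M <- lam_c * delta + M with delta ~ N(0, 1), where lam_c is
    the manual corruption strength used in the ablation."""
    rng = np.random.default_rng(seed)
    M_f = lam_c * rng.standard_normal(M_f.shape) + M_f
    M_b = lam_c * rng.standard_normal(M_b.shape) + M_b
    return M_f, M_b
```

With $\lambda_c = 0$ the masks are returned unchanged; increasing $\lambda_c$ towards 1.0 reproduces the stronger corruption settings of the ablation.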
| Method | $\lambda_c$=0.1 (1-shot/5-shot) | $\lambda_c$=0.5 (1-shot/5-shot) | $\lambda_c$=1.0 (1-shot/5-shot) |
| -------- | -------- | -------- | -------- |
| DPViT | $73.05_{\pm{0.15}}$ / $88.56_{\pm{0.18}}$ | $72.83_{\pm{0.20}}$ / $87.92_{\pm{0.19}}$ | $72.12_{\pm{0.17}}$ / $87.51_{\pm{0.15}}$ |

As depicted in the table above, DPViT displays a degree of robustness against minor to moderate corruption of the segmentation masks ($\lambda_c \leq 0.5$). However, once this threshold is surpassed, the interpretability of DPViT declines markedly. We substantiated this observation through qualitative analysis; please refer to Figures 2 and 3 in the uploaded rebuttal pdf.

> Why is the orthogonality constraint applied to the whole matrix rather than foreground/background separately? Is this done for practical reasons (i.e., it would be difficult to implement) or is there a motivation behind this choice?

We would like to clarify that the orthogonality is applied to the foreground and background parts separately rather than to the whole matrix (as indicated by Equation 6):

$\mathcal{L}_{Q} (\lambda_s, \lambda_o) = \lambda_s ||\mathbf{P} ||_1 + \lambda_o \Big[\sigma \big(\mathbf{P_F} \cdot \mathbf{P_F}^{T} - \mathbf{I} \big)+ \sigma \big( \mathbf{P_B} \cdot \mathbf{P_B}^{T} - \mathbf{I}\big) \Big]$

> Limitations: The paper aims to reduce spurious correlations that can help mitigate some biases/lack of generalization in neural networks. I do not see an explicit potential for negative societal impact. The limitation section is missing and could include technical limitations/assumptions behind the work, but is not a deal-breaker.

Limitations of our work:
- A constraint within our framework involves relying on a pre-existing foreground extractor. In certain scenarios, such as the classification of tissue lesions for microbiology disease diagnosis, obtaining an existing foreground extractor might not be feasible.
- At present, DPViT focuses on learning components that are connected to the data, yet it doesn't encompass the connections between these components, like their arrangement and hierarchical combination. Introducing compositional relationships among these components could enhance comprehensibility and facilitate the creation of a part-based model capable of learning relationships among the parts. --- Rebuttal Comment 1.1: Title: A gentle reminder (Reviewer DtGy) Comment: Thank you again for your thoughtful review. Does our response help address your concerns? We would appreciate the opportunity to engage further if needed. --- Reply to Comment 1.1.1: Title: Final kind reminder (Reviewer DtGy) Comment: Since the author-reviewer discussion period concludes on Monday, we're interested in confirming whether our response adequately addressed the reviewers' paper-related concerns. Please inform us if you require additional clarification, and we will make every effort to provide a response by tomorrow. Once again, we appreciate your feedback and the time you've dedicated to the review.
Rebuttal 1: Rebuttal: We thank all reviewers for their positive feedback: the proposed method tackles an important problem in a novel way with increased interpretability in representations [gyRR, Tpfu]; is clearly motivated and technically sound [DtGy, QGVs]; achieves state-of-the-art performance with improvements over existing methods [QGVs, Tpfu]; has adequate details/ablations [DtGy]; and is nicely written and easy to follow [gyRR]. We respond to the comments of each reviewer individually below. Pdf: /pdf/928c5dde682b4f9a4fac612b60d4a31e9bce5a9b.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Near-Optimal $k$-Clustering in the Sliding Window Model
Accept (poster)
Summary: This paper proposes the first near-optimal $(1+\varepsilon)$-approximation algorithm for the $(k, z)$-clustering problem in the sliding window model. The core part is a $(1+\varepsilon)$-coreset for $(k, z)$-clustering in the sliding window model, which is based on an online coreset algorithm for the $k$-clustering problem. The paper gives rigorous proofs for the proposed algorithm, and experimental results demonstrate its efficiency. In short, the theoretical contribution of this paper is solid. Strengths: (1) This paper gives the first $(1+\varepsilon)$-approximation algorithm for $(k, z)$-clustering in the sliding window model, which improves the existing results in two aspects: accuracy and space. (2) The space $\frac{k}{\min (\varepsilon^4, \varepsilon^{2+z})}$ of the $(1+\varepsilon)$-coreset for $(k, z)$-clustering in the sliding window model almost matches the lower bound $\Omega \left(\frac{k}{\varepsilon^2} \log{n}\right)$. (3) Rigorous proofs are provided and the experimental results are sufficient. (4) The paper is well written and well organized. Weaknesses: (1) Some symbols are not explained, e.g. $[\Delta]^d$. (2) The merge-and-reduce framework lacks citations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Lines 92-93, Theorem 1.1 presents the word space $\frac{k}{\min(\varepsilon^4, \varepsilon^{2+z})} \text{poly} \log \frac{n \Delta}{\varepsilon}$. For $k$-median and $k$-means, $z$ equals $1$ and $2$, respectively. Whether $z$ is $1$ or $2$, the space should be $\frac{k}{\varepsilon^4} \text{poly} \log \frac{n \Delta}{\varepsilon}$ since $\varepsilon \in (0, 1)$. How can you achieve $\frac{k}{\varepsilon^2} \text{poly} \log \frac{n \Delta}{\varepsilon}$? In addition, for $(k, z)$-clustering, $z=1$ does not correspond to $k$-median when $\mathrm{dist}$ is the Euclidean distance, since $k$-median uses the $\ell_1$-norm. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Some symbols are not explained, e.g. $[\Delta]^d$. We will clarify that $[\Delta]^d$ means that each of the $d$ coordinates of each point must lie within $\{1,\ldots,\Delta\}$. > The merge-and-reduce framework lacks citations. We will add references to previous work that use the merge-and-reduce framework. > Whether $z$ is $1$ or $2$, the space should be $\frac{k}{\varepsilon^4}\text{poly} \log\frac{n\Delta}{\varepsilon}$ since $\varepsilon\in(0,1)$. How can you achieve $\frac{k}{\varepsilon^2}\text{poly} \log\frac{n\Delta}{\varepsilon}$? We emphasize that our results do not claim to achieve $\frac{k}{\varepsilon^2}\text{poly} \log\frac{n\Delta}{\varepsilon}$ words of space but rather $\frac{k}{\varepsilon^{z+2}}\text{poly} \log\frac{n\Delta}{\varepsilon}$ words of space. Moreover, in light of an $\Omega\left(\frac{k}{\varepsilon^{2+z}}\log n\right)$ lower bound by [41], it is not possible to achieve $\frac{k}{\varepsilon^2}\text{poly} \log\frac{n\Delta}{\varepsilon}$. > In addition, for $(k,z)$-clustering, $z=1$ does not correspond to $k$-median when $\text{dist}$ is the Euclidean distance, since $k$-median uses the $\ell_1$-norm. Since $(k,z)$-clustering is a general definition that is the sum of the $z$-th power of the distances, $k$-median with the Euclidean distance is the sum of the Euclidean distances. Nevertheless, $k$-median is also well-defined when the underlying distance is the Manhattan distance, though this setting is beyond the current scope of our paper. --- Rebuttal Comment 1.1: Comment: I don't think you answered my questions directly. (1) In Theorem 1.1, the space in words is $\frac{k}{\min(\epsilon^4, \epsilon^{2+z})} \text{polylog}\frac{n \Delta}{\epsilon}$. In Lines 92-93, whether $z=1$ or $z=2$, it would be $\frac{k}{\epsilon^4} \text{polylog}\frac{n \Delta}{\epsilon}$ for $\epsilon \in (0, 1)$, right?
However, this paper claims that it achieves $\frac{k}{\epsilon^2} \text{polylog}\frac{n \Delta}{\epsilon}$ words of space! (2) In your definition of $(k, z)$-clustering, it is the summation of $\ell_2$-norms, but $k$-median is the summation of $\ell_1$-norms. Please be precise. --- Reply to Comment 1.1.1: Comment: Thanks for the follow-up! We provide specific responses to your points below -- please let us know if you have any further questions. 1) Ah yes, thanks for catching the typo! Lines 92-93 should claim a space bound of $\frac{k}{\varepsilon^4}\text{polylog}\frac{n\Delta}{\varepsilon}$ rather than $\frac{k}{\varepsilon^2}\text{polylog}\frac{n\Delta}{\varepsilon}$. Note that this is consistent with Theorem 1.1 for $z=1$ and $z=2$ (as well as more general values of $z$), since $\min(\varepsilon^4,\varepsilon^{2+z})=\varepsilon^4$ for $z\le 2$ as you pointed out. Moreover, this is consistent with Table 1, the claimed bounds in the abstract, and the lower bound of $\frac{k}{\varepsilon^{2+z}}$ by [41]. 2) We would like to emphasize that $(k,z)$-clustering is defined as the sum of the $z$-th power of the distances in some fixed metric, e.g., see [23, 24, 41, 42, 47]. Thus when the fixed metric is the Euclidean distance, then $k$-median, which corresponds to $z=1$, is the sum of the Euclidean distances, i.e., the sum of the $\ell_2$ distances, NOT the sum of the $\ell_1$ distances. For example, note that [42] furthermore explicitly defines $(k,z)$-clustering as the sum of the $z$-th power of the *Euclidean* distances. We also remark that it is possible to study this problem for all ranges of $z$ when the underlying metric is the Manhattan distance, i.e., the $\ell_1$ norm. However, this setting is beyond the current scope of our paper.
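For concreteness, the $(k,z)$-clustering objective under the Euclidean metric, as defined in the reply above, can be written in a few lines of NumPy (an illustrative sketch, not code from the paper):

```python
import numpy as np

def kz_cost(X, C, z):
    """(k,z)-clustering cost: sum over points x in X of
    min_{c in C} ||x - c||_2 ** z. Under the Euclidean metric,
    z=1 gives k-median (sum of Euclidean distances) and z=2
    gives k-means (sum of squared Euclidean distances)."""
    # pairwise Euclidean distances, shape (n_points, n_centers)
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return float((d.min(axis=1) ** z).sum())
```

For example, with points $(0,0)$ and $(3,4)$ and a single center at the origin, the cost is 5 for $z=1$ and 25 for $z=2$.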
Summary: This paper studies the $k$-clustering problem in the sliding window model. In the sliding window model, a window of size $W$ captures the most recent $W$ updates in the stream, where a good clustering approximation should be maintained over the window with small space complexity. In previous work, in order to achieve a $(1+\epsilon)$-approximation for the $(k,z)$-clustering problem in a sliding window, a space complexity of $\frac{kd+d^{Cz}}{\epsilon^3}polylog(W,\Delta,\frac{1}{\epsilon})$ is required for some constant $C \ge 7$. In this work, the authors propose a coreset-based method, which gives a $(1+\epsilon)$-approximation for the $(k,z)$-clustering problem in the sliding window model with space complexity $\frac{k}{min(\epsilon^4,\epsilon^{2+z})}polylog(\frac{n\Delta}{\epsilon})$, which is independent of the window size and nearly matches the lower bound on the space used by the offline coreset construction. Strengths: 1. The presented result significantly improves the space complexity of previous work with a $(1+\epsilon)$-approximation guarantee on the clustering quality in the sliding window model. Besides, this is the first approximation result that achieves a space complexity independent of the window size $W$. In practical settings, since $\Delta$ is usually assumed to be bounded by $poly(n)$, the presented result can nearly match the lower bound on the space complexity required for any $(1+\epsilon)$-online coreset construction for the $(k,z)$-clustering problem. 2. The extensive experimental results on real-world datasets show that the proposed method is more efficient than previous ones. Weaknesses: 1. The techniques used in this paper seem to rely heavily on the current SOTA method for offline coreset construction [2] (using ring structures and an independent sampling method) and the consistent approximation scheme proposed by Meyerson [1]. 2. The challenges of the merge-and-reduce operations are not discussed comprehensively in the literature. [1] Adam Meyerson.
Online facility location. In 42nd Annual Symposium on Foundations of Computer Science, FOCS, pages 426–431. IEEE Computer Society, 2001. [2] Vincent Cohen-Addad, Kasper Green Larsen, David Saulpic, and Chris Schwiegelshohn. Towards optimal lower bounds for k-median and k-means coresets. In STOC ’22: 54th Annual ACM SIGACT Symposium on Theory of Computing, pages 1038–1051, 2022. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. It seems that the $O(\log\Delta)$ dependence in the space complexity comes from the Meyerson approach (for obtaining the consistent approximation), where a guess-and-double process is used to obtain an estimate of the cost of an optimal solution. Is the $O(\log\Delta)$ term necessary in the proposed method, and is there any other method that can avoid guessing $O(\log\Delta)$ times for the optimal clustering cost (for example, by calling a single-criteria approximation algorithm with a large approximation ratio to serve as the upper bound on the optimal clustering cost)? 2. Can the authors explain more about the challenges when directly applying the merge-and-reduce method to each block of the divided data points in a stream without using an "inverse" operation? 3. I am curious about whether, in the sliding window model, the space complexity is more critical to consider compared to the time complexity. As I am not very familiar with this model, I would appreciate some insights on this matter. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Since this is mainly a theoretical work, I don't think there is any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
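As context for Question 1 above, the guess-and-double idea can be illustrated with a toy version of Meyerson's online facility location. This is a purely illustrative sketch (1-D points, uniform facility cost, hypothetical helper name `meyerson_ofl`), not the paper's algorithm.

```python
import random

def meyerson_ofl(stream, facility_cost, rng=None):
    """Toy Meyerson-style online facility location on 1-D points.

    Each arriving point opens a new facility with probability
    min(1, d / facility_cost), where d is its distance to the
    nearest open facility; otherwise it pays assignment cost d.
    A guess-and-double wrapper would derive facility_cost from a
    guess of OPT and double the guess whenever the accumulated
    cost exceeds its budget, giving the O(log Delta) guesses
    discussed in Question 1.
    """
    rng = rng or random.Random(0)
    facilities, service_cost = [], 0.0
    for p in stream:
        if not facilities:
            facilities.append(p)  # first point always opens a facility
            continue
        d = min(abs(p - c) for c in facilities)
        if rng.random() < min(1.0, d / facility_cost):
            facilities.append(p)   # open a new facility at p
        else:
            service_cost += d      # assign p to its nearest facility
    return facilities, service_cost
```

On a small stream of well-separated clusters, this tends to open roughly one facility per cluster while paying little assignment cost.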
Rebuttal 1: Rebuttal: > The techniques used in this paper seem to rely heavily on the current SOTA method for offline coreset construction [1] (using ring structures and independent sampling method) and the consistent approximation scheme proposed by Meyerson et al. [2]. Our main algorithm utilizes (1) a framework for sliding window algorithms using a new randomized online coreset construction along the lines of [8], (2) an online version of the coreset construction of [24], and (3) a consistent assignment across all times of the stream through an online facility location argument in the style of [30]. Although none of these steps are particularly complicated, we believe that developing the correct variations of each step and putting them together in the correct manner is fairly novel/technical, e.g., the long line of previous work for clustering on sliding windows has missed this approach. > It seems that the $O(log\Delta)$ dependence on the space complexity comes from the Meyerson approach (for obtaining the consistent approximation), where a guess-and-double process is used to obtain an estimation of the cost of an optimal solution. Is the $O(log\Delta)$ term necessary in the proposed method and is there any other method that can avoid guessing $O(log\Delta)$ times for the optimal clustering cost (for example by calling a single-criteria approximation algorithm with large approximation ratio to serve as the upper bound for optimal clustering cost)? While our lower bound shows that an $\Omega(\log n)$ dependence is necessary for any online coreset, it is indeed unclear whether an $O(\log\Delta)$ dependence is necessary. We leave this as a great question for future work. > The challenges for merge and reduce operations is not discussed comprehensively in the literature...Can the authors explain more about the challenges when directly applying the reduce and merge method for each block of the divided data points in a stream without using an "inverse" operation? 
The main barrier is that existing merge-and-reduce frameworks using coresets do not handle the implicit deletions of the sliding window model. Thus we instead show that the existing coreset construction of [1] can be modified in an online manner, and then show that there exists a corresponding merge-and-reduce framework for online coresets. > I am curious about whether, in the sliding window model, the space complexity is more critical to consider compared to the time complexity. As I am not very familiar with this model, I would appreciate some insights on this matter. Yes, in the sliding window model, as in the more specific streaming model, the main quantity of interest in previous work is the space complexity, though the time complexity should also be optimized as much as possible. Towards that end, we remark that the calculations of the sampling probabilities in our algorithm only require computing the distance of each point to a number of centers, as well as tracking the number of points in each ring/group so far. Thus, our algorithm can be efficiently implemented, and has a small polynomial running time. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response and clarification. I think this is an interesting paper.
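The rebuttal's remark that the sampling probabilities only need each point's distance to its center plus running counts can be illustrated with a hypothetical online, sensitivity-style sampling rule. The formula below is a sketch of the general idea, not the paper's exact probability.

```python
import random

def online_coreset_sample(stream, center_of, budget, rng=None):
    """Toy online importance sampling for a coreset (1-D points).

    Each arriving point is kept with probability proportional to a
    crude sensitivity bound: its share of the running cost plus a
    uniform 1/count term. Kept points are reweighted by 1/p so the
    weighted sample estimates the total cost without bias.
    """
    rng = rng or random.Random(0)
    total_cost, count, coreset = 0.0, 0, []
    for p in stream:
        d = abs(p - center_of(p))      # distance to assigned center
        total_cost += d
        count += 1
        pr = min(1.0, budget * (d / max(total_cost, 1e-12) + 1.0 / count))
        if rng.random() < pr:
            coreset.append((p, 1.0 / pr))  # (point, weight)
    return coreset
```

With a large enough budget every point is kept with probability 1 (weight 1), recovering the exact dataset; a small budget keeps mostly high-cost points with proportionally larger weights.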
Summary: This paper studies $k$-means and $k$-median clustering in the sliding window model, and proposes a $(1+\varepsilon)$-approximation algorithm on top of a coreset maintained through the stream. The space complexity is roughly $k/\text{poly}(\varepsilon) \cdot \text{poly}\log n$, where $n$ is the number of points, together with a nearly-tight lower bound. Strengths: - The paper is well-written in general; most parts are clear and well-organized. - The results are interesting and significant. I did not read the entire proof, but checking the key steps convinces me the result should be correct. I am happy to go back to proof details if any concern comes up. - It is a plus for a theory paper to have experiments. The space consumption is reduced a lot with some compromise on the clustering performance. Weaknesses: - I am confused by lines 388-389; it seems that the window size is too close to the stream size. Though theoretically the window size is not an important parameter, doing this converges back to the streaming model. - After staring at it for a while, I do not think either the algorithm or the lower bound applies to $k$-center clustering. Then is it OK to have $k$-clustering as in the title? The authors can either convince me this result holds / has impact on $k$-center clustering, or show me a bunch of work that does not include $k$-center in $k$-clustering. - I think it is good to make the notation clear, especially when it first appears. In the abstract, maybe clarify what $z, \Delta$ are. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Will the code be made public? - Can you add an open problem section? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I would regard this as a basically complete theory work. There are some limitations on the experiments, but not a hurdle to assessing the contribution of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I am confused by lines 388-389; it seems that the window size is too close to the stream size. Though theoretically the window size is not an important parameter, doing this converges back to the streaming model. We remark that the window size was set rather close to the stream size in our experiments as a simple proof-of-concept where sampling-based algorithms demonstrate better performance than histogram-based algorithms. We believe that more general datasets with different settings for window size and stream size can also be constructed so that sampling-based algorithms demonstrate better performance than histogram-based algorithms. > After staring at it for a while, I do not think either the algorithm or the lower bound applies to $k$-center clustering. Then is it OK to have $k$-clustering as in the title? The authors can either convince me this result holds / has impact on $k$-center clustering, or show me a bunch of work that does not include $k$-center in $k$-clustering Examples of works that do not include $k$-center in $k$-clustering are listed below. Nevertheless, for the sake of clarity, we can change the title to "Near-Optimal Clustering in the Sliding Window Model" if permitted. Artur Czumaj, Guichen Gao, Shaofeng H.C. Jiang, Robert Krauthgamer, Pavel Vesely: Fully Scalable MPC Algorithms for Clustering in High Dimension. CoRR abs/2307.07848 (2023) Yecheng Xue, Xiaoyu Chen, Tongyang Li, Shaofeng H.-C. Jiang: Near-Optimal Quantum Coreset Construction Algorithms for Clustering. CoRR abs/2306.02826 (2023) Sayan Bandyapadhyay, Fedor V. Fomin, Tanmay Inamdar: Coresets for Clustering in Geometric Intersection Graphs. SoCG 2023: 10:1-10:16 Lingxiao Huang, Shaofeng H.-C. Jiang, Jianing Lou, Xuan Wu: Near-optimal Coresets for Robust Clustering. ICLR 2023 Vincent Cohen-Addad, Alessandro Epasto, Vahab Mirrokni, Shyam Narayanan, Peilin Zhong: Near-Optimal Private and Scalable $k$-Clustering.
NeurIPS 2022 > I think it is good to make the notation clear, especially when it first appears. In the abstract, maybe clarify what $z,\Delta$ are. Thanks for the suggestion. We will clarify in the abstract that $z$ is the power of the distance in the cost function and $\Delta$ is the length of the grid on which the points lie. > Will the code be made public? Yes, we will upload the code to a public repository (and hopefully it should already be accessible in the supplementary material). > Can you add an open problem section? Yes, we will add a conclusion that both summarizes our contributions and provides directions for future work. > I would regard this as a basically complete theory work. There are some limitations on the experiments, but not a hurdle to assessing the contribution of this paper. Thanks for the feedback! We can move the experiments to the appendix in the full version so that the main body focuses on the theory aspects of the paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. At this moment, I am happy with most issues. But regarding the window size in the experiments, I still think having a generic window size $W$ (actually trying different $W$) is important to demonstrate that the performance remains immune to $W$, which corroborates the result that $W$ does not contribute to the space complexity. This does not contradict the motivation of showing that sampling-based algorithms perform better than histogram-based algorithms. More importantly, we should be aware of a recent result (https://research.google/pubs/pub52480) breaking the $\log(n\Delta)$ barrier in words of memory, while the result of this paper lies in $O(\text{polylog}(n\Delta))$. It is a bit of a pity that the full version is not yet available; it would be interesting to follow up on whether their technique can be used for the sliding window model. I will maintain my evaluation of acceptance.
Summary: The paper proposes a near-optimal algorithm to build a $(1+\epsilon)$-online coreset for the $k$-clustering problem, and applies it to the sliding window model. The (near) optimality is established by a matching lower bound, also proved in the paper. A coreset is a compression of a dataset, such that any set of clustering centers induces similar cost over the coreset and the original dataset. The (insertion-only) online coreset problem aims to build a coreset incrementally for a data stream, such that the coreset property (i.e., cost approximation) holds at any time point. The main algorithm borrows many ideas from paper [24] ("*A New Coreset Framework for Clustering*"), which proposes a framework for building (offline) coresets and can be roughly described as follows: 1. Given a dataset, find a constant-approximation clustering solution $A$ on it. 2. For each cluster induced by $A$, divide it into rings by distance to the center, such that all points in the same ring have roughly the same distance to the center. 3. Group rings at the same level from each cluster, then do importance sampling on the group. The sampled points (with scaled weights) will be the output coreset. For the analysis, [24] shows that as long as you can find a small "$A$-approximate centroid set" $C$ (which is essentially something like an $\epsilon$-net restricted to part of the dataset), then you can have a small coreset. The paper essentially adapts the algorithm of [24] to the online setting: 1. To get a constant-approximation clustering in the online setting, the paper adopts a bi-criteria variant of Meyerson's algorithm for online facility location. Here the algorithm could produce more than $k$ centers, but this is not an issue for bounding the coreset size since it only adds a constant factor. 2. The importance sampling step is mostly the same as described before, just in an online fashion. This gives the online coreset construction. 3.
Finally, to apply it in the sliding window model, the paper breaks the input stream into blocks, then applies the online coreset construction in reverse order on each block, and merges each block's coreset also in reverse order (keeping only those within the window). I don't have time to look into the details of the lower bound construction; however, it appears to be an adaptation of the offline bound from [23] ("*Towards optimal lower bounds for k-median and k-means coresets*"). Strengths: I think the main contribution is mostly on the theory side. The problem studied is important and well-motivated. It's nice to see that the results of [23] and [24] can be ported to the online setting. This requires quite some non-trivial work, and the paper is able to identify an optimal choice of parameters, which results in an asymptotically optimal algorithm. Weaknesses: As mentioned in the Summary, the main idea seems to be an (arguably) straightforward adaptation of [24], which somewhat limits the novelty. Also, I feel the experiment section is kinda redundant or even harmful to the paper's contribution. The proposed algorithm is not actually implemented (in particular, the RingSample algorithm, which is a core part of the proposed algorithm). It seems that the performance gain comes only from discarding points outside the sliding window, especially when compared with the histogram-based algorithm. - If the authors are able to implement their algorithm in a more meaningful way, and if it indeed produces better results on some real-world dataset, then I would be happy to increase my rating. Otherwise, I feel it's better to just remove the experiment section completely and present the paper as a theoretical contribution. Lastly, I believe the writing can be improved a lot. It seems the draft may have been finalized hastily, with instances of noticeable cutting and pasting to meet the page limit.
This renders the main body insufficiently self-contained and challenging for readers unfamiliar with [24] (who do not dive into the appendix proofs). For example, it's hard to get much intuition for why the RingSample algorithm (Alg. 1) works unless a reader already knows the result of [24]. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Some other comments. - I would recommend the authors at least give a sketch of [24]'s idea before describing the RingSample algorithm. - I recommend having "coreset" formally defined somewhere. - The notation $\mathrm{Cost}(X,Y)$ is never formally defined in the paper. Although I can guess what it is, it's better to define it somewhere as it is used frequently. - line 224: Same as above for $\mathrm{Cost}_{|S|\leq k}(X, S)$. - line 227-245: Many symbols defined here are not used at all in the main body. Please move them to the supplementary material where they are actually used in the proofs. Here they only distract readers' attention. - line 233-236: $R_I(C_i)$ and $R_O(C_i)$ have the same definition. - line 243: I would write $G_{j, \mathrm{min}}$ and $G^O_b$ as unions of some $R_{i,j}$'s rather than introducing an "$x$" there. It feels like you can take some $x$ from an $R_{i,j}$ while in fact it's all-or-nothing. - line 248: Lemma 2.2: what is $G$? - Section 3 (Experiment): My suggestion to the authors is to move the experiment section to the appendix or simply remove it, so you can have more space to enhance the readability of the theory part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
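The ring decomposition summarized in this review (step 2 of the framework of [24]) can be sketched as follows. This is a hypothetical toy version on 1-D points; the helper names (`ring_index`, `partition_into_rings`) are invented for illustration and are not from the paper.

```python
import math
from collections import defaultdict

def ring_index(dist, base=1.0):
    """Ring j holds points at distance in [base * 2^j, base * 2^(j+1))
    from their center, so all points in the same ring are at roughly
    the same distance. Points closer than `base` fall in an inner
    ring, marked None here and handled separately in [24]."""
    if dist <= base:
        return None
    return int(math.floor(math.log2(dist / base)))

def partition_into_rings(points, centers, base=1.0):
    """Assign each point to its nearest center, then bucket it by
    (center id, ring id); same-level rings across clusters are then
    grouped for importance sampling."""
    rings = defaultdict(list)
    for p in points:
        i, d = min(((i, abs(p - c)) for i, c in enumerate(centers)),
                   key=lambda t: t[1])
        rings[(i, ring_index(d, base))].append(p)
    return rings
```

Sampling then proceeds per group of same-level rings, which is what bounds the number of distinct sampling distributions the algorithm must maintain.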
Rebuttal 1: Rebuttal: > As mentioned in the Summary, the main idea seems to be an (arguably) straightforward adaptation of [24], which somewhat limits the novelty. Our main algorithm utilizes (1) a framework for sliding window algorithms using a new randomized online coreset construction along the lines of [8], (2) an online version of the coreset construction of [24], and (3) a consistent assignment across all times of the stream through an online facility location argument in the style of [30]. Although none of these steps are particularly complicated, developing the correct variations for each step and putting them together in a non-trivial manner is fairly novel/technical, e.g., the long line of previous work for clustering on sliding windows has missed this approach. > Also, I feel the experiment section is kinda redundant or even harmful to the paper's contribution. The proposed algorithm is not actually implemented (in particular, the RingSample algorithm, which is a core part of the proposed algorithm). It seems that the performance gain comes only from discarding points outside the sliding window, especially when compared with the histogram-based algorithm. Indeed, the main focus of our paper is on the theoretical perspective of the algorithm, and thus our experiments serve as a simple proof-of-concept illustrating the advantages of sampling-based algorithms over histogram-based algorithms. We will clarify the purpose of our experiments more clearly in the final version of our paper. > If the authors are able to implement their algorithm in a more meaningful way, and if it indeed produces better results on some real-world dataset, then I would be happy to increase my rating. Otherwise, I feel it's better to just remove the experiment section completely and present the paper as a theoretical contribution. We can move the experiments to the appendix in the full version so that the main body focuses on the theoretical contributions of the paper.
> Lastly, I believe the writing can be improved a lot. It seems the draft may have been finalized hastily, with instances of noticeable cutting and pasting to meet the page limit. This renders the main body insufficiently self-contained and challenging for readers unfamiliar with [24] (and not dive into the appendix proof). For example, it's hard to get much intuition why RingSample algorithm (Alg. 1) works unless a reader already knows the result of [24]. Thanks for the feedback. Much of the intuition and formalization for the coreset construction of [24], as well as our modifications, is currently in the appendix. If the paper is accepted, we will rework this intuition and incorporate it into the additional content page that is allowed for the main body of the paper. > line 248: Lemma 2.2: what is $G$? We will clarify $G$ to denote any fixed group, so that the first statement of Lemma 2.2 should read "Let $C$ be an $A$-approximate centroid set for any fixed group $G$"
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments and valuable feedback. We appreciate the positive remarks, such as - The problem studied is important and well-motivated. (Reviewer 8X17) - It's nice to see that the result of [23] and [24] can be ported to the online setting. (Reviewer 8X17) - The paper is able to identify an optimal choice of parameters, which results in an asymptotically optimal algorithm. (Reviewer 8X17) - The paper is well-written in general, most parts are clear and well-organized. (Reviewer P4Gj) - The results are interesting and significant...checking the key steps convince me the result should be correct. (Reviewer P4Gj) - It is a plus for a theory paper to have experiments. The space consumption is reduced a lot with some compromise on the clustering performance. (Reviewer P4Gj) - The presented result significantly improves the space complexity of previous work with $(1+\epsilon)$-approximation guarantee on the clustering quality in sliding window model. (Reviewer YLu7) - In practical settings, since $\Delta$ is usually assumed to be bounded by $\text{poly}(n)$, the presented result can nearly match the lower bound space complexity required for any $(1+\epsilon)$-online coreset construction for $(k,z)$-clustering problem. (Reviewer YLu7) - The extensive experimental results on real world datasets show that the proposed method is more efficient than previous ones. (Reviewer YLu7) - The space $\frac{k}{\min(\varepsilon^4,\varepsilon^{2+z})}$ of $(1+\varepsilon)$-coreset for $(k,z)$-clustering in the sliding window almost matches the lower bound $\Omega\left(\frac{k}{\varepsilon^2}\log n\right)$. (Reviewer o3G5) - Strict proof is provided and experimental results are sufficient. (Reviewer o3G5) - The paper is well-written and well-organized. (Reviewer o3G5) We provide our responses to the initial comments of each reviewer below. 
We hope our answers resolve any remaining questions, and we would be happy to participate in any conversations during the discussion phase.
NeurIPS_2023_submissions_huggingface
2023
MixFormerV2: Efficient Fully Transformer Tracking
Accept (poster)
Summary: This paper proposes an efficient pure-Transformer-based tracking framework, MixFormerV2. By replacing dense prediction heads and the complex score prediction modules with four simple box tokens, the tracking pipeline is streamlined for high efficiency. Besides, Dense-to-Sparse Distillation and Deep-to-Shallow Distillation further alleviate the computational burden of the transformer architecture. Experiments show that this method achieves a good trade-off between tracking accuracy and speed. Strengths: - The motivation of achieving efficient and effective tracking with Transformers is crucial and significant for balancing performance and speed. - Experimental results show that the proposed method effectively improves the inference speed while keeping excellent performance. - The writing is clear and easy to follow. Weaknesses: - The proposed method is only applied to one baseline tracker, MixFormer. Its generality is not well proven on other Transformer trackers. - More detailed analysis of the pruned parameters should be provided, e.g., the curve between performance and prune ratio. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The training time of different distillation strategies is not provided, which reflects the practicality of the proposed method. - Experiments evidence the effectiveness of teacher supervision with a similar architecture (i.e., MixFormer). I wonder if it works with the supervision of other Transformer trackers (e.g., OSTrack)? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***C1: The proposed method is only applied on one baseline tracker MixFormer. The generality is not well proved on other Transformer trackers.*** **R1:** Thanks for your suggestion. To demonstrate the generality in this rebuttal, we conduct experiments of **dense-to-sparse distillation** (i.e., the prediction-token-based head) and **deep-to-shallow distillation** on SimTrack [7]. Besides, we also validate the effect of deep-to-shallow distillation on OSTrack. We find that our proposed dense-to-sparse distillation cannot be applied to OSTrack, since the search tokens are gradually eliminated in some layers, which makes the four learnable prediction tokens hard to capture the object position information.

| Tracker | Head | Init | LaSOT_ext | LaSOT |
| :---: | :---: | :---: | :---: | :---: |
| Sim-B/16(12) | original | - | 48.8 | 68.6 |
| Sim-B/16-T4(12) | our T4 | - | 48.3 | 68.1 |
| Sim-B/16-T4(8) | our T4 | our PMDP | 47.9 | 67.5 |
| Sim-B/16-T4(8) | our T4 | Tea-fir8 | 46.2 | 66.0 |

| Tracker | Head | Init | LaSOT_ext | LaSOT |
| :---: | :---: | :---: | :---: | :---: |
| OSTrack-256 (12) | original | - | 47.4 | 68.8 |
| OSTrack-256 (8) | original | our PMDP | 46.9 | 67.9 |
| OSTrack-256 (8) | original | Tea-fir8 | 45.1 | 66.3 |

Through the results in the above tables (without carefully tuning hyper-parameters), we can conclude that the proposed prediction-token-based architecture and the distillation method are effective on other one-stream transformer trackers. ***C2: More detailed analysis about the pruned parameters should be provided, e.g., the curve between the performance and prune ratio.*** **R2:** Thanks for your useful advice; we provide the curve between the performance and prune ratio in the common response PDF file. It can be observed that the proposed method consistently improves the performance over the baseline.
***C3: The training time of different distillation strategies is not provided.*** **R3:** Thanks for your suggestion. We will add the training time of different models in our revision. The models are trained on 8 Nvidia RTX 8000 GPUs. The dense-to-sparse stage takes about 43 hours. The deep-to-shallow stage 1 (12 to 8) takes about 43 hours, and stage 2 (8 to 4) takes about 35 hours. ***C4: I wonder if it works with the supervision of other Transformer trackers (e.g., OSTrack)?*** **R4:** We guess you mean asking whether **dense-to-sparse** distillation could be supervised by other transformer trackers. For dense-to-sparse distillation, the proposed distribution-based regression `ought to be supervised with a corner prediction head, since it can provide the probability maps for the top-left and bottom-right corners`, as illustrated in Section 3.2.1. OSTrack is not applicable, as it does not have a corner head. To respond to your concern, we use SimTrack with a corner head as the teacher for this analysis. (Note that we did not carefully tune the hyper-parameters due to the limited time.)

| Tracker | Teacher | LaSOT |
| :---: | :---: | :---: |
| MixFormerV2(12) | Sim-B/16 | 67.9 |
| MixFormerV2(12) | MixViT-B | 68.9 |

We conclude that SimTrack as a teacher provides performance comparable to MixViT-B. --- Rebuttal Comment 1.1: Comment: Some concerns have been addressed, so I keep my rating as borderline accept.
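For readers unfamiliar with the distribution-based regression mentioned in R4, here is a minimal sketch of the general idea: a softmax over discretized coordinate positions, read out via its expectation, and distilled against a teacher probability map with cross-entropy. The function names are hypothetical, and MixFormerV2's actual head and losses differ in detail.

```python
import math

def soft_coordinate(logits, bin_size=1.0):
    """Turn logits over discretized positions into a probability
    distribution, and read the coordinate out as its expectation."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]  # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    coord = sum(i * bin_size * p for i, p in enumerate(probs))
    return coord, probs

def distill_ce(teacher_probs, student_probs):
    """Dense-to-sparse distillation signal: cross-entropy between a
    teacher corner probability map (marginalized to 1-D here) and
    the student's predicted coordinate distribution."""
    return -sum(t * math.log(max(s, 1e-12))
                for t, s in zip(teacher_probs, student_probs))
```

This is also why a corner-head teacher is needed: it supplies the dense probability maps that serve as `teacher_probs`, which a purely box-regression teacher would not provide.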
Summary: Since mainstream tracking methods are somewhat limited in efficiency, this paper is motivated to develop efficient trackers. The proposed method (named MixFormerV2) is based on the recent tracker MixFormer. The key architectural improvement is to use a token-based prediction head to replace the corner-based head. Four prediction tokens are used to predict the four coordinates of the bounding box, like the CLS token in ViT. In training, a dense-to-sparse distillation and a deep-to-shallow distillation are employed to further improve trackers' efficiency. The dense-to-sparse distillation improves the performance of the token-based prediction head, while the deep-to-shallow distillation prunes some layers of the backbone. Finally, the proposed MixFormerV2-B and MixFormerV2-S achieve a good trade-off between performance and speed on GPU and CPU. Strengths: 1) The architecture of the model is simple. The main architecture is a ViT-based backbone with four prediction tokens. 2) The performance and speed are good. Compared with existing high-performance trackers, MixFormerV2-B achieves competitive performance with higher speed. Compared with existing high-speed trackers, MixFormerV2-S achieves state-of-the-art performance. 3) Employing distillation is novel for tracking. The proposed dense-to-sparse distillation and deep-to-shallow distillation help the development of efficient tracking methods. Weaknesses: 1) The description of Table 3(e) is not clear. Does Tea-skip4 use PMDP? If not, does its higher performance mean that PMDP is not necessary? If not, why can it achieve similar performance to PMDP? 2) The LaSOT extension benchmark is not included in the comparison. The HCAT method is not included in the comparison. 3) The number of parameters is not reported. [1] Fan H, Bai H, Lin L, Yang F, Chu P, Deng G, Yu S, Huang M, Liu J, Xu Y, et al. LaSOT: A High-quality Large-scale Single Object Tracking Benchmark. IJCV, 2021. [2] Chen X, Kang B, Wang D, Li D, Lu H.
Efficient Visual Tracking via Hierarchical Cross-Attention Transformer. In ECCVW, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the rebuttal, I hope to see the response to question 1) in the weaknesses. Besides, I suggest revising the paper according to 2) and 3) in the weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does not provide a discussion on limitations. I recommend providing some failure cases, aspects for improvement, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***C1: Does Tea-skip4 use PMDP? If not, does its higher performance mean that PMDP is not necessary? If not, why can it achieve similar performance to PMDP?*** **R1:** *Tea-skip4* is a **special initialization** method, which chooses the four skipped layers (layers 3/6/9/12) of the teacher (MixViT-B) for initialization. In other words, *Tea-skip4* is `an extreme case of our PMDP when the eliminating epoch m equals 0`. So it is reasonable that *Tea-skip4* performs better than the baseline *Tea-fir4*, which employs the first four layers of the teacher (MixViT-B) to initialize the student backbone, in the following table. We further evaluate the performance on the more challenging LaSOT_ext dataset. It can be seen that our PMDP surpasses *Tea-skip4* by 1.0%, which demonstrates its effectiveness.

| Method | LaSOT_ext | UAV123 | LaSOT |
| :---: | :---: | :---: | :---: |
| Tea-fir4 | 45.2 | 65.7 | 63.0 |
| Tea-skip4 | 46.1 | 66.6 | 64.4 |
| PMDP | **47.1** | **67.5** | **64.8** |

***C2: The LaSOT extension benchmark is not included in comparison. The HCAT method is not included in the comparison.*** **R2:** Thanks for your advice; we will add the following LaSOT_ext results and the HCAT comparison in our revision. As shown in the table, MixFormerV2-B surpasses SwinTrack-B and OSTrack-256 by a large margin with a high running speed.

| Method | AUC | Norm P | P |
| :---: | :---: | :---: | :---: |
| MixFormerV2-B | **50.6** | **61.0** | **56.9** |
| MixFormerV2-S | 43.6 | 52.7 | 46.2 |
| SwinTrack-B | 47.6 | 58.2 | 54.1 |
| OSTrack-256 | 47.4 | 57.3 | 53.3 |

***C3: The number of parameters is not reported.*** **R3:** We will add the number of parameters in the revised version. ***C4: The paper does not provide a discussion on limitations. I recommend providing some failure cases, improvement aspects, etc.*** **R4:** We have provided the limitations in the supplementary material.
In our revision, we will provide visualization results of the failure cases and some aspects for improvement. In the common response PDF file, we have also provided some visualization examples of good cases and failure cases. --- Rebuttal Comment 1.1: Title: Re: Comment: The authors have carefully addressed my concerns; thus, I update my rating to Strong Accept.
Summary: This paper focuses on designing an efficient transformer-based single object tracking algorithm. The basic idea is to replace the original corner-based convolutional head with a more lightweight MLP head fed with only 4 learnable tokens. Moreover, both dense-to-sparse and deep-to-shallow distillation are designed to further improve efficiency. Experimental results show that the proposed approaches achieve a better speed-accuracy trade-off compared to existing transformer-based tracking approaches. Strengths: - The paper is well written and organized, and is easy to understand and follow. - Using the distillation technique to achieve a better speed-accuracy trade-off is technically sound. - The proposed approaches are verified on various large-scale tracking benchmarks. - The paper is solid in terms of both illustration and experimental results, and brings something new about how to perform distillation for speeding up existing transformer-based trackers. Weaknesses: - What's the effect of not applying the progressive strategy for depth pruning? - It seems that the fast running speed mainly comes from the decrease in the number of layers used in the transformer backbone, as illustrated in Table 2. What tracking performance and speed can be achieved if we only apply progressive model depth pruning and a corner prediction head (without using dense-to-sparse distillation)? - Using the distillation technique to speed up deep models is not a very novel idea, since many attempts work on distillation for speed-up. One good thing is that the authors made some specific designs for VOT, like the progressive pruning strategy in PMDP. - The speed evaluation is somewhat questionable. In Tables 5-6, is the FPS comparison fair? Is the speed (FPS) evaluated on the same platform? What is the GPU platform used for GPU speed and GFLOPs evaluation? Please detail this information in the main paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No. See Weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***C1: What is the effect of not applying the progressive strategy for depth pruning?*** **R1:** As shown in the table, the baseline is the **most common initialization** method, which employs the first four layers of the teacher (MixViT-B) to initialize the student backbone, denoted as *Tea-fir4*. On the challenging LaSOT_ext dataset, our PMDP strategy improves it by 1.9% AUC. Besides, we also explore a **special initialization** method, named *Tea-skip4*, which chooses four skipped layers (layer-3/6/9/12) of the teacher (MixViT-B) for initialization. In fact, *Tea-skip4* is an extreme case of our PMDP where the eliminating epoch $m$ equals 0. It can be seen that our PMDP also surpasses *Tea-skip4* by 1.0%, which demonstrates its effectiveness. | Method | LaSOT_ext | UAV123 | LaSOT | | :-------: | :-------: | :----: | :----: | | Tea-fir4 | 45.2 | 65.7 | 63.0 | | Tea-skip4 | 46.1 | 66.6 | 64.4 | | PMDP | **47.1** | **67.5** | **64.8** | ***C2: It seems that the fast running speed mainly comes from the decrease in the number of layers used in the transformer backbone, as illustrated in Table 2. What tracking performance and speed can be achieved if we only apply progressive model depth pruning and a corner prediction head (without dense-to-sparse distillation)?*** **R2:** As illustrated in Table 1, the proposed token-based head also improves the baseline's GPU running speed by 84% when using the score prediction head for temporal-spatial tracking. The tracking performance and speed of using PMDP together with a plain corner head (not the pyramidal corner head) are showcased in the following table. It can be observed that MixFormerV2-B obtains comparable performance with a higher running speed (+45 FPS). 
| Backbone | Head | LaSOT | UAV123 | FPS(GPU) | | :---------: | :---: | :---: | :----: | :-------: | | MixViT-8 | Plain Corner | 69.4 | 70.0 | 120 | | MixFormerV2-B | T4 | 69.5 | 70.5 | 165 | Apart from the difference between the corner head and the MLP head of MixFormerV2, our T4 design is more flexible. It allows us to easily predict the target confidence score with a simple MLP head, which is quite efficient compared to the original score prediction decoder (i.e., SPM) in MixFormer and is more easily deployed on different platforms (since the Precise RoI Pooling in SPM is not supported on CPU). ***C3: Using the distillation technique to speed up deep models is not a very novel idea.*** **R3:** As you pointed out, we have made some specific designs for VOT, like the progressive pruning strategy, and demonstrated their effectiveness. We think this exploration of distillation in VOT can bring some inspiration to the tracking community. ***C4: In Tables 5-6, is the speed (FPS) evaluated on the same platform? What is the GPU platform used for GPU speed and GFLOPs evaluation?*** **R4:** Thanks for your suggestions; we will revise this in the next version of the manuscript. In Tables 5-6, the speed is evaluated on different platforms, since we directly use the FPS reported in the original papers. In the table below, we present the FPS of some transformer-based trackers on the same platform. | Method | FPS(GPU) | | :---------: | :---: | | STARK-50 | 45 | | OSTrack-256 | 95 | | MixFormerV2-B | 165 | We use an NVIDIA Quadro RTX 8000 GPU for evaluation. --- Rebuttal Comment 1.1: Comment: I appreciate that the authors address most of my concerns in the rebuttal. Overall, I think this is a solid paper worth accepting for VOT, and I hope that the authors could also release their code for reproduction, since there are still some training/inference hyper-parameters not included in the main paper. I will keep my previous rating.
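As an aside on the token-based prediction heads discussed in this thread: the rebuttals describe MLP heads that output per-coordinate probability distributions over 72 bins. The following is a purely illustrative sketch (not the authors' implementation; the soft-argmax readout is a common choice for such distribution-based heads, and the bin count of 72 is taken from the FLOPs discussion in the rebuttals):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def expected_coordinate(logits, num_bins=72):
    """Soft-argmax readout: probability-weighted average of bin centers in [0, 1].

    Each of the four coordinate MLPs would produce `num_bins` logits; the
    predicted coordinate is the expectation under the softmax distribution.
    """
    probs = softmax(logits)
    centers = [(i + 0.5) / num_bins for i in range(num_bins)]
    return sum(p * c for p, c in zip(probs, centers))

# Uniform logits give the center of the unit interval.
print(expected_coordinate([0.0] * 72))
```

A peaked distribution (large logit on one bin) would instead move the prediction toward that bin's center.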
Summary: This paper proposes a fully transformer tracking framework without any dense convolutional operations or complex modules. The key contribution is the design of different input tokens and the distillation-based model reduction paradigm. Mixed attention is performed between the prediction tokens and the image-generated tokens to capture rich correlations. The distillation methods, dense-to-sparse distillation and deep-to-shallow distillation, improve the efficiency of the proposed MixFormerV2. Evaluations carried out on major tracking benchmarks show promising results. Strengths: - The proposed fully transformer tracking framework is interesting and makes sense. - The proposed method achieves favorable performance in terms of both accuracy and running speed. Weaknesses: Some important details are missing. - The details of some components are missing, such as the details of the Score Head. - The first ablation experiment shows that the running speed of the Py-Corner version is slower than that of T4. The computational load of T4 (under the ViT framework) is not light. It is suggested to provide a detailed calculation comparison between T4 and Py-Corner. - Based on equations (8) and (9), some layers in the student network are gradually eliminated and finally turn into identity transformations. How is it decided which layers are eliminated? How is the epoch $m$ determined in the training process? Are the eliminated parameters removed, or just not involved in the processing flow of the network? - It is suggested to provide intermediate results, such as heatmaps, to better show how the proposed algorithm works. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the issues in the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not discuss the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ***C1: The details of some components are missing, such as the details of the Score Head.*** **R1:** We will add the detailed structure of the heads in our revision. The Score Head is a simple MLP composed of two linear layers with a hidden dimension of 768. Specifically, we first average the four prediction tokens to gather the target information, and then feed the resulting token into the MLP-based Score Head to directly predict the confidence score $s$, which is a real number. Formally, we can represent it as: $$ s = \mathrm{MLP}(\mathrm{mean}(\mathrm{token}_X)), X \in \{\mathcal{T}, \mathcal{L}, \mathcal{B}, \mathcal{R}\}. $$ ***C2: The computational load of T4 (under the ViT framework) is not light. It is suggested to provide a detailed calculation comparison between T4 and Py-Corner.*** **R2:** Thanks for your suggestion; we will add a detailed comparison between the Py-Corner head and the T4 head in terms of FLOPs in our revised manuscript. Since the four prediction tokens can be concatenated with the search tokens as a whole for mixed attention in our implementation, their attention latency can be omitted, which means only the four MLP heads need to be counted toward the computational load (FLOPs). Based on that, we showcase the FLOPs as follows. Formally, we denote $C_{in}$ as the input feature dimension, $C_{out}$ as the output feature dimension, $H_{in}, W_{in}$ as the input feature map shape of a convolution layer, $H_{out}, W_{out}$ as the output feature map shape, and $K$ as the convolution kernel size. The computational complexity of one linear layer is $O(C_{in}C_{out})$, and that of one convolutional layer is $O(C_{in}C_{out}H_{out}W_{out}K^2)$. In our situation, for **T4**, the Localization Head contains four MLPs to predict the four coordinates. Each MLP contains two linear layers, with a hidden dimension of 768 and an output dimension of 72. 
The loads can be calculated as: $$Load_{T4} = 4 \times (768 \times 768 + 768 \times 72)= 2580480 \sim 2.5 M$$ For **Py-Corner**, a total of 24 convolution layers are used. The loads can be calculated as: $$ \begin{aligned} Load_{Py-Corner} = 2 * (&768 * 384 * 18 * 18 * 3 * 3 + \\\\ &384 * 192 * 18 * 18 * 3 * 3 + \\\\ &384 * 192 * 18 * 18 * 3 * 3 + \\\\ &192 * 96 * 36 * 36 * 3 * 3 + \\\\ &384 * 96 * 18 * 18 * 3 * 3 + \\\\ &96 * 48 * 72 * 72 * 3 * 3 + \\\\ &48 * 1 * 72 * 72 * 3 * 3 + \\\\ &192 * 96 * 18 * 18 * 3 * 3 + \\\\ &96 * 48 * 18 * 18 * 3 * 3 + \\\\ &48 * 1 * 18 * 18 * 3 * 3 + \\\\ &96 * 48 * 36 * 36 * 3 * 3 + \\\\ &48 * 1 * 36 * 36 * 3 * 3) \\\\ = & 3902587776 \sim 3.9 B \end{aligned}$$ For simplicity, we do not include some operations such as the bias terms and Layer/Batch-Normalization, which does not affect the overall level of the computational load. Besides, the Pyramid Corner Head utilizes an additional ten interpolation operations. Clearly, the computational load of Py-Corner is still over a thousand times that of T4. In fact, FLOPs are not equivalent to running latency due to parallel computing, so we directly provide a comparison of their running times in the manuscript. Another advantage of using T4 is its flexibility: it allows us to easily predict the target confidence score with a simple MLP head, which is quite efficient compared to the original score prediction decoder (i.e., SPM) in MixFormer and is more easily deployed on different platforms (since the Precise RoI Pooling in SPM is not supported on CPU). ***C3: How is it decided which layers are eliminated? How is the epoch $m$ determined in the training process? Are the eliminated parameters removed, or just not involved in the processing flow of the network?*** **R3:** To embody multi-level representations and also reduce the difficulty of feature mimicking during elimination, we choose to eliminate layers uniformly. 
We have also experimented with eliminating all high-level layers or all low-level layers, but both turned out to perform worse than the employed choice. As shown in the table, we find that when the epoch $m$ is greater than 40, the choice of $m$ hardly affects the performance, so we set the epoch $m$ to 40. | Arch | Online | Epoch $m$ | AUC | | :-----------: | :----: | :-------: | :--: | | MixFormerV2-B | no | 30 | 68.3 | | MixFormerV2-B | no | 40 | 68.5 | | MixFormerV2-B | no | 50 | 68.5 | The eliminated parameters can be removed directly once the drop rate has decreased to zero, in both the training and inference processes. ***C4: It is suggested to provide intermediate results, such as heatmaps, to better show how the proposed algorithm works.*** **R4:** Thanks for your advice; we will add the intermediate visualization results in our revised manuscript. We have provided the visualization results in the common response PDF. MixFormerV2 predicts the probability distributions of the bounding box coordinates, so we plot the output probability distributions as shown in Fig. 1. We also visualize the attention heatmaps of the four prediction tokens in the last layer of the backbone. It can be observed that the four prediction tokens focus on the four boundaries of the target object, and the coordinate probability distributions are highly consistent with the corresponding attention heatmaps. As shown in Fig. 1(c)(d), there still exist some cases that the model is not able to handle perfectly, such as occlusion and similar objects, which may cause distribution shift. ***C5: The authors do not discuss the limitations.*** **R5:** We have discussed the limitations in the supplementary material; please check it. --- Rebuttal Comment 1.1: Title: Rebuttal by Reviewer Comment: I thank the authors for the detailed response. Thus, I suggest accepting this paper. --- Reply to Comment 1.1.1: Comment: Thanks a lot for your recognition of our work.
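As a sanity check on the FLOPs arithmetic in R2 of this thread, a short script (purely illustrative, not part of the rebuttal) reproduces both totals from the per-layer counts given above:

```python
# T4: four MLPs, each with two linear layers (768 -> 768 -> 72).
load_t4 = 4 * (768 * 768 + 768 * 72)

# Py-Corner: 3x3 convolution layers counted as C_in * C_out * H_out * W_out * K^2,
# with the whole stack duplicated for the two corner branches (leading factor 2).
conv_layers = [
    (768, 384, 18, 18),
    (384, 192, 18, 18),
    (384, 192, 18, 18),
    (192, 96, 36, 36),
    (384, 96, 18, 18),
    (96, 48, 72, 72),
    (48, 1, 72, 72),
    (192, 96, 18, 18),
    (96, 48, 18, 18),
    (48, 1, 18, 18),
    (96, 48, 36, 36),
    (48, 1, 36, 36),
]
load_py_corner = 2 * sum(cin * cout * h * w * 3 * 3 for cin, cout, h, w in conv_layers)

print(load_t4)        # 2580480 (~2.5M)
print(load_py_corner) # 3902587776 (~3.9B)
```

Dividing the two totals gives a ratio of roughly 1500x in favor of the T4 head.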
Rebuttal 1: Rebuttal: We thank all the reviewers for their efforts in reviewing our paper and for their insightful comments and valuable suggestions. We have provided the visualization of intermediate results and the curve of performance versus pruned depth in the PDF file. Pdf: /pdf/c62db2e9f9da2ee17b0de8ec9ad005031c6930c7.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
The Grand Illusion: The Myth of Software Portability and Implications for ML Progress.
Accept (poster)
Summary: The paper explores the combination of hardware and software for machine learning, and asks the question: How portable are popular ML software frameworks? The authors conduct a quantitative study of the portability of mainstream ML frameworks across different hardware types. Based on the experimental results, they claim that machine-learning frameworks can lose more than 40% of their key functions when ported to other hardware, and slow down significantly even when functions are portable. Collectively, the results reveal how costly straying from a narrow set of hardware-software combinations can be, and suggest that the specialization of hardware impedes innovation in machine learning research. Strengths: 1. The paper raises a concern about the lack of quantitative study of ML portability, which has not received enough attention in current research. 2. This paper reports large-scale experimental results that quantify hardware-software combinations, which can serve as a reference for industry developers. Weaknesses: 1. The lack of portability across different hardware/software stacks is widely known. For example, apps that work on Arm/Android systems are difficult to run on x86/Windows systems. Under mature commercial circumstances, the paper does not prove why portability is important and necessary in ML. 2. The paper concludes that "specialization of hardware impedes innovation in machine learning research", which is not proved in this paper. According to the No Free Lunch theory in computer architecture studies, domain-specific hardware needs to find a trade-off between generality and efficiency on specific workloads. It is not realistic to design hardware that supports every workload as well as other hardware does while still keeping its own advantage in some domains. 3. The paper needs to focus more on the quantitative standard for ML portability. The current method only considers the slowdowns of the software framework APIs, but not actual tasks or NNs (such as ResNet or VGG). 
The standard is confusing, and the authors need more analysis or evidence. 4. The paper is not easy to follow. The significance and originality of the contribution are not clear. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Paper significance: How is the lack of portability of software frameworks significant in ML? It is not easy to understand, since there is no realistic evidence. What do the quantitative results suggest compared to the intuitive result? We would like to know more about why quantification is important here. 2. Methods: Why does the paper use framework operations instead of NN workloads for evaluation? The code needs to be manually checked to port to different hardware. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: The authors have not discussed their limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal (fKz5): We thank the reviewer for their feedback, including their positive observation that this work "raises a concern about the lack of quantitative study of ML portability, which has not received enough attention in current research" and the positive view of the experimental rigor and breadth of our work. These experimental results were the core of our research. We believe we are among the first to quantify the lack of portability between machine learning frameworks. Moreover, we will release the dataset to the ML community. From reviewer fKz5, we are rebutting the following claims: 1. **"The lack of portability across different hardware/software stacks is widely known."**: While knowledge of hardware disparities may be known to some, we believe the magnitude of the findings presented in this paper is not known. We believe our findings are striking: PyTorch and TensorFlow have portability issues across GPUs and TPUs, with slowdowns of over 10x when PyTorch functions are moved from GPU to TPU; meanwhile, JAX functions, designed with TPUs in mind, run 91.8% faster on TPUs. Our contribution is to offer a rigorous evaluation and benchmark of this issue, which is in keeping with the goals of the Evaluation track. 2. **"Under mature commercial circumstances, the paper does not prove why portability is important and necessary in ML."**: Our primary focus is on the portability of ML frameworks for researchers, as stated in Lines 29-30. We are interested in how a lack of portability can increase the barrier of entry for new ideas. A lack of software portability means that researchers are locked into certain hardware/software combinations. This can bias against certain ideas and advantage others. This has been widely discussed in research, including Hooker et al. (2021) and Barham et al. (2019). 
The effect on developers and researchers of significant differences in hardware latency is that ideas overfit certain tooling stacks and fail to transfer to other software stacks. This can turn the act of writing machine learning code into an arcane art of understanding framework sharp edges and hardware complexity. 3. **"The current method only considers the slowdowns of the software framework APIs, but not actual tasks or NNs (such as ResNet or VGG)"**: Our objective was to develop an analysis that would broadly cover a wide variety of workloads. A major concern when overly focusing on popular architectures or tasks is that we might sideline the diverse range of code and ideas that researchers are exploring, some of which might not have reached popularity or high optimization levels. To address our concerns about workload representation, we sampled our functions from a distribution based on function frequency. This was designed not to underrepresent workloads, thus ensuring a balanced view of the performance landscape across the range of ideas researchers may want to try. Analyzing workloads instead of individual functions also poses additional challenges in terms of the rigor of our analysis. We will add the following reasoning to the final manuscript to make this clearer: 1. Sometimes, analysis is not possible. For example, suppose an input x goes through three functions F, G, and H in the order F(G(H(x))). If the middle function, G in this case, fails because it is not portable, we will not be able to test the function F. 2. Different workloads also use different framework versions. Therefore, if a workload uses a deprecated function, we might face (1). 3. It is biased: a function X might work for the common task but fail in niche cases. Therefore, per-function test cases are much more thorough and thus more suitable for testing. 4. Operations are the building blocks of ML workloads. 
The performance and portability of the operations directly impact the workloads that use them. 4. **"Domain-specific hardware needs to find a trade-off between generality and efficiency on specific workloads. It is not realistic to design hardware that supports every workload as well as other hardware does while still keeping its own advantage in some domains"**: While it does not make sense to implement ML operations for every feasible piece of hardware, it is not unrealistic to point out that the current state of ML frameworks can increase friction in the adoption and exploration of ideas across hardware because of the severe drops in performance. 5. **"How is the lack of portability of software frameworks significant in ML? It is not easy to understand, since there is no realistic evidence. What do the quantitative results suggest compared to the intuitive result? We would like to know more about why quantification is important here."**: We motivate our work by citing prior works (Hooker et al. (2021) and P. Barham et al. (2019)), which make a strong case for why the lack of portability is significant. As for the quantities themselves, we believe quantifying latency differences over multiple runs tells a compelling story about the portability differences between frameworks. 6. **"The authors have not discussed their limitations."**: We thank the reviewer for pointing this out! Our research has three limitations. Firstly, we used only one type of GPU and TPU. During the rebuttal, we added an additional variant, which we believe mitigates some of the limitations of the original submission. Secondly, our current focus is on portability for researchers, so a future direction would be to focus on commercial/application-based portability, which would include analysis involving CPUs. 
Lastly, we couldn't discern the underlying reasons for the observed performance degradation when transferring functions across devices, since we don't have access to the source code (CUDA is not open source). Exploring this aspect would be valuable for the ML community. We will add these limitations to our final manuscript. --- Rebuttal Comment 1.1: Comment: Now that discussion is underway, we wanted to ask fKz5 if there are any follow-up points you would like further clarity on. During the rebuttal period, we carried out additional experiments and are here to ensure that we have addressed your questions and doubts regarding the limitations of our methodology, the paper's significance, the trade-off between generality and efficiency, portability under commercial circumstances, and the quantitative standard for ML portability. If everything is clear and the recent experiments have addressed your concerns, we kindly ask fKz5 to consider increasing their score to reflect these changes. --- Rebuttal Comment 1.2: Title: Thanks for the response! Comment: Sorry for my late response. I thank the authors for their detailed rebuttal. My major concern remains the novelty and significance of this work. 1. The fact that there is a "lack of portability between machine learning frameworks" is not a novel claim. The experiments in this paper give more quantitative evidence for this fact, but that does not contribute much to the research community. The key question to the authors should be: "Why should the portability gap be evaluated quantitatively, given that it is known knowledge?" The experimental results should be further analyzed. For example, researchers only know that the lack of portability exists, but "why" and "how" the lack of portability occurs should be studied. 2. This paper claims that "a lack of portability (between hardware) can increase the barrier of entry for new ideas", but this is not proved in the paper. 
How well functions (and more broadly models) map from one framework to another seems to be more important to researchers with new ideas. For example, the quantitative results cannot show which specific kinds of new ideas are constrained when researchers are locked into certain hardware/software combinations. The paper cites some discussions, but the quantitative results in this paper do not contribute much to the claim beyond those related discussions. 3. The fact that "TPUs and GPUs support some software APIs better than others" is not a novel claim. This paper quantifies the gap between the APIs, but it is not clear what this proves. According to Amdahl's Law, partial results (such as APIs) do not tell the exact story of real-world programs. The quantitative results do not yet support the claim that there are severe drops in performance for researchers which increase the barrier of entry for new ideas. 4. Overall, the paper contributes to quantifying the lack of hardware portability, but my major concern is why the quantitative results are significant to the research community. --- Reply to Comment 1.2.1: Comment: Thank you to **fKz5** for the response and the additional detail about what would convince them of the paper's significance and novelty. In our response, we'd like to specifically address several of the issues raised: **(i)** whether it constrains innovation, **(ii)** whether this phenomenon is well known, and **(iii)** whether quantifying this effect (rather than just noting its existence) is important for broader decisions. **(i) Does a lack of portability constrain innovation?** The reviewer is concerned that our paper does not show evidence of the constraining of innovation. While not explicit, we would argue that we are doing this implicitly, and we provide some examples here to justify why these implicit limitations would be consequential. 
While some innovation happens de novo, building something new from scratch, much happens through local adaptation, where an existing innovation is adapted (Eisenhardt and Tabrizi 1995). Such practice is, of course, very common in ML, where there is extensive reuse of code, models, etc. Our paper directly implies a constraining of innovation because it means that someone who has previously developed their work in a framework like TensorFlow, tied to a particular piece of hardware, may be unable to switch to another, advantageous framework if that framework lacks the needed functionality/performance, or will overfit their ideas to the tooling stack they have. While it is hard to directly count instances of non-invention, because "didn't invent" also means "didn't publish," we can nevertheless point to particular examples where the lack of software portability has stifled innovation: * **Early exiting (Abadi et al. 2016, Teerapittayanon et al. 2017)** is a very popular efficiency strategy for avoiding unnecessary computation. But early exiting has no impact on memory requirements or efficiency when using software stacks that fully instantiate the computation graph prior to running the program (i.e., TensorFlow). Thus this is an optimization that works well in other frameworks but gains us nothing in the case of TensorFlow. * **Naive multi-device training distribution strategies** are sensitive to the choice of software stack used. The choice can have a pronounced impact on dispatch overhead and communication patterns, with PyTorch not being able to engage in some distributed workloads (Barham et al., 2022). * **Capsule networks (Sabour et al. 2017)** have unique operations like squashing and routing that stray from standard matrix multiplies. Capsule networks are far less efficient in TensorFlow, given the requirement for adaptive routing. * **Adaptive learning or data pruning**. 
Both require removing examples from a batch that are estimated not to be important (adaptive pruning does it over the course of training, and data pruning can be a single shot before training). Both techniques have no impact on efficiency when using software stacks that require fixed shapes (i.e., TensorFlow), as instead of changing the batch size on the fly, you need to pad the batch with zeros.  * **Proximal gradient optimization and variants (Parikh and Boyd 2014)**. Implementing these techniques in PyTorch is straightforward due to Pytorch’s flexible design granting granular control over the training loop. Conversely, Keras abstracts much of the underlying intricacies, which can limit the direct control users have over specific training loop customizations. All these examples are promising and important research directions that are impacted by the lack of portability in tooling. In the next response to **fKz5**, we will respond to the remaining two concerns of **(ii)** whether this phenomenon is well-known, and **(iii)** whether quantifying this effect (rather than just noting its existence) is important for broader decisions.
Summary: This paper studies the portability of three different ML frameworks (PyTorch, TensorFlow, and JAX) across different hardware types (GPUs and TPUs). The authors sample a variety of functions implemented in these frameworks and test to what extent these functions are interoperable across platforms, and also compare the function latency between the hardware types. The evaluation finds that many functions are either unimplemented, fail to execute, or execute far more slowly on one hardware type versus the other. For example, the authors report that JAX functions are almost always faster on TPUs as opposed to GPUs. Strengths: - In general, portability is a key practical concern and therefore the paper provides a valuable contribution for the ML community. - Methodology for selecting functions for evaluation is comprehensive and the benchmarking is carefully executed. - Thorough evaluation of GPU vs TPU execution reveals many interesting insights about function performance across hardware types. - Paper includes granular analysis of failure cases. Weaknesses: - While portability is a very relevant and practical concern for ML practitioners, this paper only explores portability in the dimension of GPU vs TPU execution within the same framework. Another related dimension is how well functions (and more broadly models) map from one framework to another; for example, if I have a convolution kernel written in PyTorch executing on a GPU, how comparable is this kernel’s execution to a similar convolution operation in TensorFlow? While frameworks may share underlying cuDNN kernel implementations under the hood, it would be interesting to see to what extent this symmetry exists (it appears that some basic version of this analysis could already be performed using the data in Tables 4, 5, and 6). An extension of this dimension is how well frameworks operate with intermediate representation (IR) libraries like ONNX and OpenXLA. 
In particular, a helpful experiment might be taking a list of functions in PyTorch, exporting them to an IR and then exporting that IR to TensorFlow, and seeing what percentage of functions fail to execute, produce different outputs, and/or experience slowdowns. - The paper only evaluates two specific hardware types: T4 GPUs and v3-8 TPUs. While the evaluation revealed significant disparities even between these platforms, the analysis would be even more compelling if expanded to include additional GPU / TPU types or new hardware types (e.g. AMD and/or Cerebras machines). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Why are certain functions so much slower on TPUs than GPUs and vice versa? This question may be difficult to answer given some kernel implementations are black box, but for kernels where source code is available it would be helpful to understand the performance behavior in more detail. For example, do certain kernels more effectively use hardware caches on GPUs vs TPUs? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The paper does not discuss limitations in much detail, but does discuss broader impacts. I do not foresee any negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal (Twor) We thank reviewer Twor for their feedback and for noting the core strengths of our work, including "portability is a key practical concern and therefore the paper provides a valuable contribution for the ML community" and "evaluation of GPU vs TPU execution reveals many interesting insights about function performance across hardware types." We believe portability is of extreme importance, and that quantifying the lack of portability is a valuable contribution. We also appreciate the reviewer highlighting the comprehensiveness of our methodology and benchmarking: "Methodology for selecting functions for evaluation is comprehensive and the benchmarking is carefully executed." Specifically, the reviewer noted our comparison of operations on GPU vs TPU and the granular categorization of failure modes. We address the following points from reviewer Twor: 1. **"While portability is a very relevant and practical concern for ML practitioners, this paper only explores portability in the dimension of GPU vs TPU execution within the same framework. Another related dimension is how well functions (and more broadly models) map from one framework to another; for example, if I have a convolution kernel written in PyTorch executing on a GPU, how comparable is this kernel's execution to a similar convolution operation in TensorFlow?"**: We chose to focus our effort on hardware portability rather than framework portability because we believe it is the larger, more relevant problem. Software, and specifically frameworks, have no physical limitations on usage: while it may take a developer time to switch from framework to framework, this is a question of bits. Hardware, by contrast, exists in the physical world, and users are often limited by the devices they own or the budget they have to access them.
While it would be fascinating to dive into how frameworks map to the underlying operations in CUDA and other compilation targets, we believe the first step was our process of measuring latencies and failure rates within a framework. Given the experimental time cost and the compelling narrative these results provide, we believe this was a sufficient step for releasing this paper. We believe this is a promising direction for future research and will add it to our limitations and future work section in the manuscript. 2. **"An extension of this dimension is how well frameworks operate with intermediate representation (IR) libraries like ONNX and OpenXLA. In particular, a helpful experiment might be taking a list of functions in PyTorch, exporting them to an IR and then exporting that IR to TensorFlow, and seeing what percentage of functions fail to execute, produce different outputs, and/or experience slowdowns."**: While we agree that exploring compilation targets and intermediate representations is a useful direction for this research, we believe it is a rich direction for future work. Here, our primary goal is to provide a rigorous evaluation of portability as a preliminary step to motivate such future work. Our experimental setup is already costly, requiring hand-coding, verifying, and running a total of 1,146 experiments. 3. **"The paper only evaluates two specific hardware types: T4 GPUs and v3-8 TPUs. While the evaluation revealed significant disparities even between these platforms, the analysis would be even more compelling if expanded to include additional GPU / TPU types or new hardware types (e.g. AMD and/or Cerebras machines)."**: We agree that more device types make for a stronger narrative when it comes to our results. We have taken the rebuttal period to incorporate the reviewer's feedback and run additional experiments. In the attached PDF, we include a table showing failure rates for both the A100s and the T4s.
Please see Table 1 in the attached file for the evaluation of A100s. 4. **"Why are certain functions so much slower on TPUs than GPUs and vice versa?"**: We thank the reviewer for the interesting question. To deeply understand why specific functions perform differently on TPUs and GPUs, we would need a granular profile of the operations down to the hardware level. However, in some cases, we do not have access due to the closed-source nature of CUDA and XLA code. Given that and the time cost of doing our current experiments, we believe that understanding why certain latency differences exist is beyond the scope of this paper. That said, even if the exact reasons for these lags are not fully investigated due to proprietary software barriers, recognizing their existence is crucial for the ML community. --- Rebuttal Comment 1.1: Comment: Thank you for the thoughtful response to the review and adding the additional A100 GPU results. I think the justification provided for the focus on hardware portability makes sense, though I would still be interested to see follow-up work in a subsequent paper on framework portability, as well. I do still think that some discussion about why certain functions are slower on each hardware type would strengthen the paper; even if there is no scope for running specific experiments for this paper, I would like to see some insight into A) possible explanations for this behavior and B) a high-level methodology for investigating the behavior in more detail (a single paragraph would suffice in my opinion). Overall, given the additional experiments added and the rebuttal text I have improved my score to a 6. --- Reply to Comment 1.1.1: Comment: We thank Twor for updating their score positively to reflect the additional experiments added and our clarifications in the rebuttal text. 
We welcome the feedback on providing additional observations to the reader in the discussion section as to why certain functions are slower on different hardware types. We will reply shortly before the end of this discussion phase with some additional context on A) and B) that we would be happy to include in the discussion and future works section of the final manuscript. We thank Twor again and are happy to engage further during the discussion period if any other questions come up.
Summary: The authors study the portability of a core set of PyTorch, TensorFlow, and JAX functions across hardware platforms, specifically GPUs and TPUs. They find significant fractions of functions fail to run on a given platform fully, partially, or within a tolerable latency. They provide the generated dataset. Strengths: - Timely analysis of portability of widely used ML software frameworks across different hardware devices - Significant and surprising results that can go a long way toward shining a light on the issue and improving the current situation - Well-written and clear exposition - The public dataset release is highly beneficial for the community. Weaknesses: - For reproducibility, it would be useful to detail the exact software versions used for the benchmarking explicitly - No CPU comparisons are made, which would be quite interesting to add given that many ML applications are run on CPUs (both x86 and ARM) - In related work, there could be some discussion of the emerging ideas around FAIR for ML (https://www.rd-alliance.org/groups/fair-machine-learning-fair4ml-ig) and FAIR for research software (https://www.rd-alliance.org/groups/fair-research-software-fair4rs-wg) Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - What versions of the software were used? - Is it possible to add comparisons to CPU (x86 or ARM)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors have addressed limitations and broader societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Rebuttal (FGUA) We thank the reviewer for their review and constructive feedback. We appreciate your recognition of the timely importance of our analysis of the portability of ML software frameworks across distinct hardware devices. It is great that you found our paper provides “Significant and surprising results that can go a long way toward shining a light on the issue and improving the current situation.” These results, as you pointed out, expose the existing challenges in the domain. Your acknowledgment of our “Well-written and clear exposition” and that “The public dataset release is highly beneficial for the community” motivates us to continue on this path. We address the following points from reviewer FGUA: 1. **"What versions of the software were used"**: We clarify the framework versions: TensorFlow 2.11.0, PyTorch 1.12.0, and JAX 0.4.8. We will add these details to our paper, and we thank the reviewer for pointing out this inadvertent omission. 2. **"Is it possible to add comparisons to CPU (x86 or ARM)?"**: Unfortunately, we were unable to complete these during the time allowed for the rebuttal – we prioritized the additional hardware variants on GPU, since it takes time to run all functions and variants. However, we are happy to commit to adding these experiments to the final manuscript and agree that this is an important variant. 3. **"In related work, there could be some discussion of the emerging ideas around FAIR for ML (https://www.rd-alliance.org/groups/fair-machine-learning-fair4ml-ig) and FAIR for research software (https://www.rd-alliance.org/groups/fair-research-software-fair4rs-wg)"**: We thank the reviewer for the suggestion of a wider body of work that would be good to highlight.
The two suggestions, FAIR for ML (https://www.rd-alliance.org/groups/fair-machine-learning-fair4ml-ig) and FAIR for research software (https://www.rd-alliance.org/groups/fair-research-software-fair4rs-wg), relate to open science initiatives and multi-institutional collaborations to improve the findability, accessibility, interoperability, and reuse of research software. These are important motivators of our current work. We will add these to the final manuscript as adjacent directions for understanding the role of software in findable, accessible, interoperable, and reusable research. --- Rebuttal Comment 1.1: Comment: Thank you for the response. While I feel a similar study for CPUs would be valuable, I acknowledge that given time and space constraints, that may be for a different paper. Overall, I maintain my original score. --- Reply to Comment 1.1.1: Comment: We thank **FGUA** for the time and effort put into your review. We appreciate the confidence in the merits of our work and thank you for the feedback during the rebuttal period.
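For reproducibility, the framework versions stated in this rebuttal can be pinned in a requirements file. The PyPI package names below are the standard ones (an assumption on our part; accelerator-specific builds of TensorFlow and JAX vary by platform):

```
tensorflow==2.11.0
torch==1.12.0
jax==0.4.8
```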
null
null
Rebuttal 1: Rebuttal: # Global Response We thank all reviewers for taking the time to evaluate our manuscript. We appreciate the recognition of the timeliness and relevance of our study (R fKz5 and FGUA), emphasizing the often overlooked issue of ML software framework portability across different hardware types (R fKz5). It is encouraging to see the consensus on the value of our large-scale experimental results, which can serve as a reference for both academia and industry (R fKz5, Twor, FGUA). The thorough and careful evaluation methodologies employed, particularly our comprehensive approach to function selection and benchmarking (R Twor), as well as our granular analysis of failure cases (R Twor), are recognized positively. Furthermore, we are grateful for the acknowledgment of our contribution in revealing significant insights about function performance across GPUs and TPUs (R FGUA, Twor). As reviewer Twor suggested, we have included a PDF with results for another GPU type. We now include both T4 and A100 GPU results. We believe this makes our research even stronger, and we hope to extend to more types of devices in the future. Pdf: /pdf/224374fea8671d5f2261aa01187275eb2af75e3f.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Learning Shared Safety Constraints from Multi-task Demonstrations
Accept (poster)
Summary: The authors propose a novel method for constraint learning from expert demonstrations by an optimal CRL policy. The idea behind the algorithm is to frame the problem as a zero-sum game between a policy player and a constraint player, using inspiration from a two-player zero-sum game expression of IRL and CRL. With the reward function available, expert demonstrations, and a class of potential constraints containing the ground-truth constraint, the authors propose to minimize the cost-to-go subject to being at least as safe as the expert, i.e. to incur no larger a cost than the expert policy on all constraints in the set. The authors derive no-regret convergence bounds for the learned constraints and deal with degeneracy issues by extending the problem to the multi-task framework with asymptotic guarantees. They present simulation results on high-dimensional robotics tasks. Strengths: Very sound paper, and very clean and ingenious framing of the problem as a game. I really enjoyed reading the paper as the work elegantly composes concepts from IRL, CRL, GT and no-regret learning. I also see it as being of great significance for a very important problem, that of learning constraints from demonstrations. Overall very solid work. Weaknesses: - Although the algorithmic contribution is very strong, it seems that the practical algorithm section of the paper is much weaker and feels rushed. The single-task results seem very simplistic while the multi-task maze example is discussed far too briefly. There is one sentence describing the threshold after which correct recovery of the constraints occurs, an aspect that would deserve a more extensive presentation and analysis. - Along the same lines, Figure 5 deserves to be reworked for clarity; it mixes too many things together with little readability. - There is no discussion of the computational costs of running the algorithm or of its data requirements.
The authors only mention succinctly that CRL is slower than RL (by how much?), but it would be useful to know how many demonstrations one needs for typical robotics scenarios and how much compute the recovery of constraints needs. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - How do the authors expect their solution to scale to systems where the state is much higher-dimensional than in the simple examples presented? For example, do they think that their method could be a direction to learn constraints from sequences of images in applications such as autonomous driving or flight? - (Optional) out of curiosity, I am wondering if the authors have any insight on how much more challenging it would be to consider constraints in multi-agent environments? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - The authors mention the lack of real-world experiments as a limitation; I could not agree more, but would go even further in suggesting that the proposed simulations are insufficient to help readers fully grasp the potential of the method. The paper would warrant a higher rating should that aspect be improved. - I am awaiting the authors' response to better understand whether the method would actually be practical to use for real-life applications and robotics problems of interest. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their kind words about our work. Responding in order to the concerns raised: W1: Because of the overly conservative constraints single-task ICL produces on some problems (e.g. a maze), our main goal with the single-task experiments was to show that this is not always the case, if one is able to restrict the class of constraints appropriately. Specifically, on both of the high-dimensional continuous control problems we consider, we are able to recover the ground-truth constraint. In response to your concern about the multi-task experiments, as we mention in the global response, we performed experiments on a more complex problem: recovering the walls of the ant-maze (rather than the point maze). W2: We agree and would be more than happy to split up Figure 5 into multiple plots, given an additional page of space. W3: While our sample complexity analysis does provide an answer to the question of how many tasks we might need to see to recover a good constraint, we agree that we could add more details about how many demonstrations we used for our practical experiments. For all experiments, we used 20 demos from the expert per task. Once we’ve optimized the policy, learning a constraint is just a classification problem and is therefore of limited computational expense. The computational complexity of constrained RL vs. regular RL depends a lot on how related the constraint and the reward function are to each other (as if they are highly correlated, we are likely to need more Lagrange multiplier updates / policy optimization steps). It is therefore difficult for us to make a global statement. Q1: We would expect that learning constraints in a high-dimensional input space like images would be fundamentally difficult (for all methods) as different views of the same scene could correspond to the same level of constraint violation. 
However, for problems like self-driving, it is common to perform policy learning on top of a fixed multi-modal scene encoder rather than end-to-end learning. We would expect our approaches to scale quite well to these relatively low-dimensional representations. Q2: In theory, it shouldn’t be any more challenging to have multi-agent environments. We were actually thinking of adding in some experiments of this flavor but, because solving multi-agent RL problems can be challenging in and of itself, chose not to as we did not want to introduce additional complexities. L 1/2: We believe our method could be easily applied on top of a pre-existing planning stack. One direction we are currently working with some collaborators on is the problem of learning costmaps for off-road driving. There, one could imagine using our method to, by comparing human drivers and the output of a standard planner, learn the set of obstacles in the scene the ATV should avoid. Of course, anything with real robots takes more than a week to implement, but we do believe our method is applicable to these sorts of problems. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for their clarifications. I have no other questions and maintain my review score.
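The rebuttal's point that "learning a constraint is just a classification problem" once the policy is optimized can be made concrete with a toy sketch of the constraint player's best response in the ICL game: given state-visitation frequencies from the expert and from an unconstrained learner, pick the candidate constraint the learner violates most relative to the expert. Everything here (the grid, visit counts, and constraint class) is a hypothetical illustration, not the paper's code:

```python
def visitation(counts):
    """Normalize raw state-visit counts into a distribution."""
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def expected_cost(dist, constraint):
    """Expected per-step constraint cost J(pi, c) under a visitation dist."""
    return sum(p for s, p in dist.items() if s in constraint)

# The expert detours around cell (1, 1); the reward-greedy learner cuts through it.
expert = visitation({(0, 0): 4, (0, 1): 3, (1, 0): 3})
learner = visitation({(0, 0): 2, (0, 1): 2, (1, 1): 6})

# Candidate constraints: each forbids a single grid cell.
candidates = [frozenset([s]) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]]

# Constraint player's best response: maximize how much more the learner
# violates the constraint than the expert does.
best = max(candidates,
           key=lambda c: expected_cost(learner, c) - expected_cost(expert, c))
print(best)  # frozenset({(1, 1)}) -- the cell the expert avoids but the learner uses
```

In the full algorithm this update alternates with a CRL inner loop that re-optimizes the policy against the current constraint; the sketch above shows only one constraint-player step.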
Summary: The paper proposes a novel inverse constraint learning (ICL) approach that leverages constrained reinforcement learning (CRL) and a game-theoretic formulation. Specifically, the paper shows that the game-theoretic view of inverse reinforcement learning (IRL) naturally extends to ICL, by forming the Lagrangian of CRL. The resulting single-task ICL algorithm is shown to Pareto-dominate the expert policy in terms of its expected cumulative reward and the constraint violation. However, the recovered constraint may be too conservative and generalize poorly under a new task. To address this issue, the framework is extended to incorporate expert demonstrations under multiple tasks that share the same constraint. The paper shows that the multi-task ICL framework is expected to approximately Pareto-dominate the expert policies. Simulation results suggest the efficacy of the approach in continuous and discrete MDPs. Strengths: * Unlike prior ICL methods, the proposed framework is built on the game-theoretic approach. This brings in multiple benefits: 1) the form of the reward function can remain general; 2) conditions on approximate Pareto dominance can be derived, which gives theoretical justification of the proposed approach; 3) the resulting algorithm is relatively straightforward to implement. * Figures 1 and 2 give a nice visual representation of the proposed approach. Algorithms 1, 2, and 3 are presented in a consistent and organized way that clarifies the differences among CRL, single-task ICL, and multi-task ICL. Weaknesses: * The most concerning point is that the paper provides no comparison to prior work in the experiments. Some important insights of the proposed approach are shared by the seminal work of [1]. Specifically, both approaches 1) observe that access to reward functions enables constraint learning by looking at the differences between safe demonstrations and unsafe optimal demonstrations; 2) can leverage safe demonstrations from multiple tasks.
To strengthen the contribution of the paper, it should include qualitative and quantitative comparison to [1] in Section 4 (where applicable), to clarify the practical advantages of the proposed framework. This might involve evaluating the ICL algorithm in more realistic problems than the toy task of maze navigation, since the authors claim better scalability of their approach compared to [1] in realistic problems (in Section 2). In addition, the paper should give a more detailed description of [1] to better highlight the similarities and differences. * There are multiple places in the paper where a variable or an acronym is used without definition: 1) $c^*$ in Theorem 3.1; 2) “FTRL” in line 153. Also, some figures in Section 4 are confusing and hard to interpret. Specifically, I believe that the blue cells in Figure 5(b) and (c) represent start and goal cells. However, the same color is used in other sub-figures to represent the constraints. It is recommended to use different colors to represent the start and the goals so that readers can make sense of Figure 5 as a whole. For further points concerning the presentation of the paper, please see the questions below. [1] Chou, Glen, Dmitry Berenson, and Necmiye Ozay. "Learning constraints from demonstrations with grid and parametric representations." The International Journal of Robotics Research 40, no. 10-11 (2021): 1255-1283. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * What is $c_i^k$ in Algorithm 3? Shouldn’t it be just $c_i$ since the constraint is shared across all the $K$ tasks? * The order of minimization and maximization is swapped between (6) and (7). Does the equality hold because of the standard minimax theorem in game theory? If so, are there any conditions imposed on the form of $J$ in order for the equality to hold? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: * The paper assumes access to demonstrations of the optimal safe policy for a given task. While the availability of safe demonstrations is a reasonable assumption, optimal demonstrations may be too demanding to ask of a human demonstrator in real-world settings [2]. It would be quite interesting to see how the proposed ICL framework handles suboptimal demonstrations; in theory, the objective function of equation (4) does not seem to require the optimality of the expert under the given reward function $r$. [2] Xu, Haoran, Xianyuan Zhan, Honglei Yin, and Huiling Qin. "Discriminator-weighted offline imitation learning from suboptimal demonstrations." In International Conference on Machine Learning, pp. 24725-24742. PMLR, 2022. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
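The "Lagrangian of CRL" mentioned in the summary above can be illustrated with a minimal primal-dual loop on a one-dimensional toy problem (all numbers and dynamics are illustrative; this is not the paper's solver). The policy is just the probability p of taking a risky action that yields reward 1 and constraint cost 1, under a cost budget of 0.3:

```python
budget = 0.3   # allowed expected constraint cost
lam = 0.0      # Lagrange multiplier
eta = 0.05     # dual step size
history = []

for _ in range(2000):
    # Primal best response to the Lagrangian L(p, lam) = p - lam * (p - budget):
    # play the risky action iff its penalized value is positive.
    p = 1.0 if 1.0 - lam > 0 else 0.0
    history.append(p)
    # Dual ascent: raise the multiplier when the constraint is violated,
    # lower it (but never below zero) when there is slack.
    lam = max(0.0, lam + eta * (p - budget))

avg_p = sum(history) / len(history)
print(round(avg_p, 2))  # the averaged policy hovers near the budget, ~0.3
```

The individual iterates oscillate (best responses are extreme), but the averaged policy converges toward the constraint boundary; this averaging is the standard trick in no-regret analyses of such games.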
Rebuttal 1: Rebuttal: We appreciate that the reviewer found our figures / algorithm descriptions clear and that our algorithm seemed straightforward to implement – we think this bodes well for its application to real-world problems. Responding to the concerns raised: W1: Please see our global response for the relationship between our work and that of Chou et al. We would be more than happy to add the above discussion to our work (perhaps in the appendix as it is rather lengthy?). Please also see our global response on experimental complexity. W2: $c^*$ is the ground-truth constraint, as defined on line 127 – that being said, we would be happy to add a reminder of this point to the theorem statement. FTRL stands for Follow the Regularized Leader, a standard no-regret algorithm – we apologize for not explaining this acronym and would be happy to add in a description. While we did label the start/goal and maze squares differently, we can see why their shared color would be confusing and would be happy to use a different color. Q1: You are totally right, we will update that in our next draft. Q2: Yes, this is allowed because of the minimax theorem. In the setup of the problem, we assumed the conditions for Sion’s minimax theorem to hold (convex / compact strategy spaces). Even more simply, in the finite policy / constraint class setting, we are solving a matrix game for which von Neumann’s original minimax theorem holds. L1: This is an excellent point we didn’t fully appreciate until the reviewer pointed it out – thanks! We don’t actually require the expert to be the optimal policy, only that they are as safe as we want our agent to be. Our Thms. 3.1 / 3.5 clearly allow for the learner to return a policy that outperforms that of the expert. We would be more than happy to re-word any claims to the contrary in our current draft. Please see our global response for experimental evidence that our method is able to take advantage of suboptimal demonstrations.
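The min/max interchange discussed in Q2 can be stated abstractly. The payoff $f$ below is a placeholder for the paper's objective in (6) and (7), which is not quoted in this thread, so only the structure of the argument is asserted here:

```latex
% Sion's minimax theorem: if the policy class \Pi and constraint class
% \mathcal{C} are convex, at least one of them is compact, and the payoff
% f(\pi, c) is convex in \pi and concave (e.g. linear) in c, then
\min_{\pi \in \Pi} \; \max_{c \in \mathcal{C}} \; f(\pi, c)
  \;=\;
\max_{c \in \mathcal{C}} \; \min_{\pi \in \Pi} \; f(\pi, c).
```

In the finite policy / constraint setting the payoff is bilinear in the mixed strategies, so von Neumann's original minimax theorem suffices, as the rebuttal notes.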
--- Rebuttal Comment 1.1: Comment: We thank the reviewer for their thoughtful comments. As we get closer to the end of the discussion period, please let us know if we were able to answer all of your questions or if there is anything else we could clarify.
Summary: This paper proposes to use inverse constraint learning (ICL) in multi-task scenarios. It uses game solving to describe the ICL problem, and then illustrates the learning algorithm for the single-task and multi-task problems. The authors give a theoretical analysis explaining under what conditions the learned reward can be generalized. The authors show the correctness of the proposed algorithm in both single-task and multi-task control experiments. Strengths: - The motivation of this paper is clear, and it is reasonable to extract a common safety constraint from multiple tasks. - The paper considers whether the learned reward and constraint can be generalized theoretically, and gives two conditions and related proofs. Weaknesses: - The paper does not give a comparison with the ICL algorithms mentioned in the related work, and does not show the advantages of the algorithm in terms of either learning or safety. - The experimental settings of the paper are too simple, considering only two constraints that do not require complex approaches to learn. This gives the machine learning method no advantage over human priors. - The authors do not give results on robotic control tasks (in simulation or the real world) that truly need safety constraints, such as robotic manipulation with humans. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the authors supplement the experimental results with more complex robotic control tasks? For example, consider a manipulation task in which we hope that the force or speed of the robot arm will not be too large while picking up an object. Can the algorithm learn such complex safety constraints? - The authors' experiments in multi-task scenarios essentially improve the diversity of samples through randomization. We generally don't consider this a real multi-task RL scenario because all tasks are navigation.
Can the authors explore diverse but related manipulation tasks (like pushing and pulling an object) in an open-source taskset (such as [meta-world](https://meta-world.github.io/)) to show the effectiveness of the proposed method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It is good that the paper focuses on the safety problems of reinforcement learning for robotics. However, I suggest that the authors use more realistic robotic tasks (at least in simulation) that truly need safety constraints to verify the method. Whether in terms of comparison with related works or task complexity, the experiments in the paper are not solid enough to support the good motivation and theoretical results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of our theoretically-grounded approach. Responding to the concerns raised: W1: Please see our global response for the relationship between our work and that of Chou et al. W2: We’re not exactly sure what a “human prior” is here; could the reviewer clarify? W3/Q1/Q2: If we understand correctly, all three of these bullet points are in essence asking the same question: “what evidence do you have that the proposed method can scale to tasks like manipulation around people?” **We would ask the reviewer to first read our global response on experiment complexity.** At a high level, our focus in this paper is not robotic manipulation specifically, but learning with constraints more abstractly. Thus, we would argue that a lack of experiments in this particular domain is not a sufficient reason to reject our paper. As we point out in our global response, much of the prior work in inverse constraint learning focuses on tabular problems, while most of the work in constrained reinforcement learning focuses on relatively low-dimensional tasks. So, we would argue that, compared to the prior art, we are considering strictly harder settings. While we agree that focusing on manipulation-specific applications would be interesting, our current experiments are equally high-dimensional (e.g. for our ant experiments, the state space is 27-dimensional and the action space is 8-dimensional, compared to a 7-DoF manipulator) and we show results where we are able to recover workspace constraints. More explicitly, given that we can learn a position constraint for a high-dimensional ant agent, we should also be able to learn constraints on the end effector of a manipulator. Similarly, if we are able to handle data that comes from the agent navigating to different parts of the maze, placing a block in different locations / goals should not be that different.
Hence we see no conceptual reason why the algorithm would not scale to manipulation. --- Rebuttal Comment 1.1: Title: Please clarify "human prior" Comment: Dear Reviewer 3RKE, In order to give the authors an opportunity to address your W2, can you briefly respond to their question regarding a "human prior"? Thank you very much! --- Rebuttal Comment 1.2: Comment: I believe my concern arose because I lack experience in this domain (safe RL). So I now understand that the single-task experiment shows that, if there is an unknown position or velocity constraint (defined by a linear function), the policy can learn not to break the constraint? The human prior I mentioned is that position and velocity constraints can be fully specified and easily learned with simple reward shaping. But if the safe RL area aims to set up tasks with totally unknown constraints and to show that the training process can discover them, I agree that it would be too strict to evaluate this paper from a more application-oriented view. I will raise my score, as I did not fully understand the safe RL area and the rebuttal has addressed my main concern. But I would also be glad to see future work on real robotic tasks that truly face safety problems. --- Reply to Comment 1.2.1: Title: Re: Comment: Thanks! Yeah, the single-task experiments show that we are able to learn safe policies without knowledge of the ground-truth constraint.
Summary: Broadly, the paper addresses the challenge of safety constraints for robots, in particular the difficulty of handcrafting these. They propose learning safety constraints from expert demonstrations, particularly in a multi-task setting where each task reward is known and the safety constraints are task-agnostic. They review related work on inverse RL (IRL), constrained RL (CRL), inverse constraint learning (ICL), and multi-task ICL. Compared to existing ICL approaches, they claim their approach is more general, provides more guarantees, and is simpler to implement. They extensively formalize ICL, including explaining relevant approaches to IRL and CRL. As a brief recap, they compare ICL to IRL: IRL has an outer loop that searches over potential rewards, with an inner loop that uses RL to find an optimal policy. ICL has an outer loop that searches over potential constraints, with an inner loop that uses CRL to find a safe-optimal policy. In either case, the optimized policy is compared to the expert demonstrations to revise the learned reward or constraint. They also explain a challenge with single-task ICL, namely that ICL may learn an overly-conservative safety constraint that simply forbids all states not visited by the expert demonstrations. This motivates their exploration of multi-task ICL, where the safety constraint is assumed to be shared among the tasks. The primary novel contribution of the paper is an extension of a single-task ICL approach to multi-task: "we alternate between solving K CRL problems and updating the constraint based on the data from all policies". Using it, they prove that they can exactly identify a shared constraint given enough expert policies, under certain very strong assumptions. 
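The alternating scheme quoted above could be sketched as follows. This is a hypothetical Python outline, not the paper's code: `solve_crl` and `update_constraint` are assumed stand-ins for whichever constrained-RL and constraint-fitting subroutines one plugs in.

```python
# Sketch of the multi-task ICL loop described in the summary: alternate
# between solving K constrained-RL problems (one per task) under the current
# constraint, and revising the single shared constraint from the data of all
# K resulting policies. Helper names are assumptions, not the paper's API.

def multi_task_icl(rewards, expert_data, solve_crl, update_constraint,
                   init_constraint, n_iters=10):
    """rewards: list of K task reward functions; expert_data: K demo sets."""
    constraint = init_constraint
    for _ in range(n_iters):
        # Inner loop: one CRL problem per task under the current constraint.
        policies = [solve_crl(r, constraint) for r in rewards]
        # Outer loop: revise the shared constraint using all policies' data.
        constraint = update_constraint(constraint, policies, expert_data)
    return constraint, policies
```

The key point the summary makes is visible in the structure: every task's policy is re-optimized against the *same* constraint, so the constraint player cannot overfit to the states any single expert happened to avoid.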
They also prove a weaker but more practical theorem: if they have enough tasks, they can learn a constraint that, if used to optimize a policy, leads to policies that don't exceed the constraint-violation of the expert demonstrations and do meet or exceed the reward of the expert demonstrations (within a tolerance). For the single-task case, they evaluate their approach in the simulated ant environment in PyBullet and MuJoCo benchmarks. They choose arbitrary velocity and position constraints. They represent their learned constraints as neural networks, mapping from the state space of their agent to a bounded scalar in the range [0, 1]. They show that their approach can recover these ground-truth constraints, under certain simplifying assumptions (linear constraints; usage of a subset of the agent state as input). For multi-task, they use the PointMaze environment from D4RL, with each task variant having one of two start positions and one of 10 goal positions. Using their approach, they are able to approximately learn the ground-truth constraints (namely the maze's blocked cells). They also show that naive approaches for combining 10 separate learned constraints from single-task ICL don't work well. Strengths: Their premise, that safety constraints are difficult to engineer and should be task-agnostic, is accurate and relevant to this domain. Their approach is a straightforward extension of an existing single-task ICL approach to multi-task. It works well and matches the intuition that safety constraints should be task-agnostic. Rigorous math in section 3 to formalize their work. Weaknesses: In section 3, it's not clear what is prior work and what is their novel contribution. In 3.1, the authors copied several sentences verbatim from "Inverse Reinforcement Learning without Reinforcement Learning" (https://arxiv.org/pdf/2303.14623.pdf, section 3.1). The use of copied phrases like "we solve" suggests the authors, themselves, did this work. 
In their single-task and multi-task results, they don't compare to any baselines, for example "Learning constraints from demonstrations" from Chou et al. For readability, it would be helpful if they numbered references. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Sections 3.4 and 3.5 mention needing expert policies. However, the premise of your work is that you have access to expert demonstrations and reward functions, but not expert policies. Can you clarify? A suggestion for positioning this line of work: instead of "safety constraints", I might argue that a more broad (and thus more valuable) set of constraints to be learned from human demonstrations is "task-agnostic human-behavior constraints", which would encompass safety plus other conventions and preferences. E.g. coming back to the example in your introduction, if I ask my friend to make toast, I'd be surprised if they set a plate down upside-down or rinsed the bread under the faucet. >we consider a setting in which we have access to demonstrations of the optimal safe policy for a task, along with knowledge about the task’s reward Can you describe this setting in practical terms? It seems that your approach has stronger requirements, namely (1) access to not just the rewards received by the demonstrations, but access to the reward function itself, and (2) access to a simulator or other mechanism to train a policy using CRL. As a clarifying example: if you had access to expert trajectories of a real-world robot including full scene state and reward annotations, across multiple tasks, this alone would not be sufficient to learn safety constraints using your method, right? >In short, if we observe enough tasks, we are able to learn a constraint that, when optimized under, leads to policies that approximately Pareto-dominate that of the expert. Can you elaborate? What is the specific Pareto-dominance here? 
My understanding is: you can learn a constraint that, if used to optimize a policy, leads to policies that don't exceed the constraint-violation of the expert demonstrations and do meet or exceed the reward of the expert demonstrations (within a tolerance). However, I don't intuitively follow why this should be your goal for the learned safety constraint. For Figure 2, can you offer an intuitive explanation? For example: - can you label the axes? - why is an expert policy a single point in this space? - what does the boundary of the polytope represent? - what does this mean? "diversity of expert policies, none of which are optimal along the reward vector" Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No concerns here Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for an incredibly detailed summary, which evinces a very thorough reading of our work. W1: Sec. 3.1 and 3.2 are standard algorithmic techniques. We would be happy to add something like “Prior Work” to the subsection titles. W2: If we understand correctly, the reviewer is referring to lines 102-104 and 111-115. These are standard sets of definitions (i.e. 102-104 is the definition of an MDP, 111-115 is the standard description of game-theoretic IRL). Given we are describing prior work in this section, we chose to stick to the notation / assumptions (e.g. convexity / compactness of strategy spaces) that is common in the literature. We also note that there are differences (e.g. we search over stationary policies) from the particular paper the reviewer mentions. W3: Our lack of comparisons to baselines in single-task ICL is because there are limited prior methods for this problem that scale to continuous control tasks (e.g. McPherson et al. assume we can perform value iteration on a tabular MDP). We compare to several baselines in the multi-task setting (e.g. the average / max of the learned constraints) that show that using multi-task data is critical for successful constraint learning. See the global response for the connection to Chou et al. W4: Sure! Q1: We apologize for any confusion: our methods definitely do not require a DAgger-style interactive expert. We believe the reviewer is referring to terms like those that appear in Equation (12). This is mostly a notational issue: given access to trajectories from some $\pi$, we can evaluate the value of the policy $J(\pi, r)$ for an arbitrary reward function $r$ without query-access to the policy by simply re-labeling the demonstrations with the reward function of interest. $\rho_{\pi_E}$, the visitation distribution of the expert, once again does not actually require query access to the expert policy, so we can evaluate the expression in Equation (13). 
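The re-labeling argument in Q1 can be made concrete with a minimal sketch (hypothetical helper, not the paper's code): given logged trajectories from some policy, its empirical value under *any* reward function is computable from the stored (state, action) pairs alone, with no query access to the policy.

```python
# Minimal sketch of reward re-labeling: estimate J(pi, r) for an arbitrary
# reward function r by re-scoring stored trajectories from pi. Names and
# trajectory format ([(state, action), ...] per episode) are assumptions.

def estimate_value(trajectories, reward_fn, gamma=1.0):
    """Monte-Carlo estimate of J(pi, r) from logged trajectories of pi."""
    returns = []
    for traj in trajectories:
        ret = sum(gamma**t * reward_fn(s, a) for t, (s, a) in enumerate(traj))
        returns.append(ret)
    return sum(returns) / len(returns)
```

Because `reward_fn` is only ever applied to already-logged pairs, the same demonstrations can be re-used to evaluate the policy under every candidate reward or constraint, which is what makes the terms in Equations (12) and (13) computable offline.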
Q2: This is an interesting point that gets at the difference between rewards and constraints. We usually think of a reward term as something with a fixed weight across tasks that gently shapes behavior. In contrast, a constraint is a term in your cost function that can be arbitrarily scaled up until the learner is no longer violating it. We would expect conventions that can sometimes be violated (e.g. setting the plate upside down to let the bottom dry off) to fit more under the purview of rewards. That being said, it is an interesting question as to whether our method would end up learning conventions. Q3: In the inverse reinforcement learning literature, access to demonstrations and the environment is standard. However, once we move to the space of constraints, we require an additional piece of information: the expert’s reward. Intuitively, the reason knowledge of the reward function is important is that it allows us to distinguish between an action not being taken because it was a) unsafe or b) suboptimal. Without this piece of information, we would have to assume that *all* untaken actions were unsafe and would therefore likely recover an overly conservative constraint. We note that *all* prior published work on ICL that we are aware of also makes this assumption. Q4: Your understanding of what we meant by “Pareto dominate” is entirely correct. When thinking about learning constraints, one could either a) attempt to recover the ground-truth constraint or b) try to learn a constraint that allows the learner to act safely / performantly. As we discuss in Sec. 3.5, the former can be an unreasonably challenging goal for realistic problems as it requires a high diversity and number of expert policies. However, this is not a problem unique to ICL: it also shows up in inverse reinforcement learning. In IRL, there can be multiple reward functions that make the expert policy optimal (e.g. a constant reward of zero). 
Thus, we try to learn a reward function where, if we computed the optimal policy under it, we would be guaranteed to have a bounded sub-optimality with respect to the expert policy under the ground-truth reward function. In essence, this is what the game-solving procedure guarantees us. In the ICL setting, we instead want to learn a constraint that explains the expert behavior (e.g. one that forbids highly rewarding but untaken behavior). The guarantees we get (that we learn a constraint that allows our learner to act safely / performantly) are analogous to the IRL guarantees and are in essence the strongest we could hope for without restrictive assumptions. This is part of what distinguishes our analysis from that of Chou et al., who focus on constraint recovery guarantees. Q5: We would be more than happy to add more description of this figure. At a high level, $\rho_{\Pi}$ refers to the space of occupancy measures (i.e. the set of state-action distributions of all policies in our policy class). The reason we chose this representation was that a constraint is simply a hyperplane in this space (the red line labeled $\langle \rho_{\pi}, c^* \rangle = 0$). To uniquely determine a line in $\mathbb{R}^d$, we need $d$ points on the line – these are what the green dots / expert policies $\pi_E^1$ and $\pi_E^2$ correspond to. We need two policies as our visitation distributions are two-dimensional (e.g. a two-state MDP). The technical conditions in Lemma 3.2 are the more formal version of the above statements (e.g. relint meaning the points are on the line rather than on the boundary of the space). Each corner of the polytope represents a policy, while the boundaries represent a convex combination of two policies. 
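The geometric picture in this answer can be illustrated numerically with entirely made-up data: expert occupancy measures that lie exactly on the constraint boundary $\langle \rho, c^* \rangle = 0$ in a two-dimensional occupancy space, from which the constraint normal is recovered as the common orthogonal direction.

```python
# Toy numerical version of the Figure 2 picture (hypothetical data, not from
# the paper): expert occupancy measures sit on the constraint line
# <rho, c*> = 0 through the origin in R^2. The constraint normal spans the
# null space of the matrix stacking the expert points.
import numpy as np

rho_experts = np.array([[1.0, 2.0],
                        [3.0, 6.0]])   # both satisfy 2*x - y = 0

# Right singular vector with the smallest singular value is orthogonal to
# every row, i.e. it is the (unit) constraint normal up to sign.
_, _, vt = np.linalg.svd(rho_experts)
c_hat = vt[-1]

# Every expert occupancy measure incurs zero constraint cost under c_hat:
assert np.allclose(rho_experts @ c_hat, 0.0, atol=1e-10)
```

With fewer expert points than dimensions, the null space is higher-dimensional and the normal is not unique, which mirrors the rebuttal's point that exact constraint identification needs $d$ suitably diverse expert policies.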
“Diversity of expert policies” means distinct policies and “not optimal along the reward vector” refers to there being a policy that performs better than the expert under the reward function (otherwise, we don’t need to solve a constrained problem at all and therefore the expert policy is not useful for extracting a constraint). --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal and apologies for the lateness of my reply. >variety of baselines (e.g. the max, mean, or individual single-task constraints) These seem like weak baselines. It's not clear that they are "prior state of the art" or otherwise a well-regarded existing approach for recovering shared safety constraints from multiple tasks. Do you have any references to prior published work that utilize/advocate any of these approaches? I will continue reviewing your rebuttal and post a longer response later today. --- Reply to Comment 1.1.1: Title: Re: Comment: Thanks, we look forward to your full response! Re: the multi-task baselines: to the best of our knowledge, there is only one other published work that considers the multi-task ICL problem, which is that of Chou et al. As we discussed in our global response, re-implementing their method requires access to several custom solvers that differ based on the particular family of constraints one wishes to search over, making it difficult for us to directly compare. We therefore turned our attention to single-task baselines, and compare to several reasonable ones. --- Rebuttal Comment 1.2: Comment: >[Chou et al] did not release their code, it would be quite an undertaking for us to re-implement their methods on the problems we consider. As I'm sure you would agree, this (very) inconvenient fact doesn't exempt your work from comparison against strong baselines. >much of the prior work in inverse constraint learning focuses on tabular problems, while most of the work in constrained reinforcement learning focuses on relatively low dimensional tasks. 
So, we would argue that compared to the prior art, we are considering strictly harder settings. Unfortunately I don't have enough confidence in this domain to agree/disagree here. >It would be interesting to determine whether the proposed method can work for tasks beyond maze locomotion, which lack complex interactions with the environment. I agree with this reviewer's comment. In particular, the complexity I'd like to see is not just high dimensionality, but also long horizon length (e.g. avoiding a hot stove until it has cooled) and low predictability (e.g. keeping a larger distance from a pet or child that might move erratically). I'm not able to increase my rating. I suspect that a proper review of your work requires a deeper understanding of IRL, CRL, and ICL than I can offer. There's a lot of information in your rebuttal and reviewer discussions that I wasn't able to absorb, so I'm going to lower my confidence score. --- Reply to Comment 1.2.1: Comment: We thank the reviewer for their candor. We'd like to respond on a few points. 1. Beyond the fact that it is "very inconvenient" to implement the method of Chou et al., it is also extremely easy for us to construct problems for which our method readily applies but one cannot apply the method of Chou et al. For example, if we make one of the blocked sections in the maze a *circle* instead of a square, one can no longer use the "axis-aligned rectangles" technique of Chou et al. to recover a reasonable constraint. It would be rather shocking to us if a deep neural network could not fit this circle. Furthermore, Chou et al.'s method requires knowing the constraint family (e.g. "axis-aligned rectangles") a priori, while ours does not. This makes our techniques far more easily applicable to real-world problems where one does not usually know the exact parametric family of the constraint they are trying to fit, a limitation acknowledged by Chou et al. 2. 
The *minimum* horizon task we consider is of length 1000, while our maze task goes up to several thousand. So, we would argue that we are already learning constraints on long-horizon behavior. If there are any pieces of our above responses or our responses to other reviewers that could benefit from a more in-depth explanation, please let us know. We thank the reviewer for their engagement with our work!
Rebuttal 1: Rebuttal: We thank all reviewers for their carefully considered feedback. **Limited Experiments:** We would like to begin by noting that, in comparison to many of the standard benchmarks in safe RL, our considered tasks are higher dimensional (e.g. our ant-based tasks are higher dimensional than every task in the standard Safety Gym benchmark) and we are successfully solving them without knowing the ground-truth constraint. Furthermore, much of the prior work in ICL has performed experiments purely on gridworld problems [Vazquez-Chanlatte et al., Scobee and Sastry, McPherson et al.] or used simple constraint function classes like axis-aligned rectangles [Chou et al.], while we perform experiments with deep neural networks on continuous control problems. That being said, we agree with the reviewers that it would only make our paper stronger if our multi-task experiments were of the same dimensionality as our single-task experiments and therefore implemented our algorithm on the D4RL AntMaze task (rather than the PointMaze task we included in our initial draft). On this significantly harder problem, we were able to recover all of the walls correctly (within a single iteration) and therefore learn policies that match expert performance and safety. We also performed multiple iterations of ICL to show that our learned constraint is stable (and therefore does not require the early stopping that single-task ICL approaches need). We believe this is strong evidence that our method is able to scale to complex, high-dimensional, multi-task problems. Furthermore, we note that our multi-task method is the only method which correctly recovers the ground-truth constraint compared to a variety of baselines (e.g. the max, mean, or individual single-task constraints), which are overly conservative, highlighting how our focus on the multi-task setup has produced a more practical method than the prior, single-task art. See Figures 5/6 in the PDF for full results. 
**Relationship to the work of Chou et al.:** As we pointed out in our related work section, our problem setup is quite similar to the setup of Chou et al., in that we assume access to both safe demonstrations and task rewards. However, algorithmically, we believe our techniques are more general. Chou et al. use variants of hit-and-run sampling to generate low-cost, safe trajectories (a non-standard algorithm for non-convex control problems), while we make no assumptions about the particular constrained RL method one uses. This allows us to learn deep feedback control policies, rather than open-loop trajectories. Their “gridded” constraint learning methods require the use of integer / mixed-integer program solvers while their “parametric” constraint learning methods require restrictive constraint classes (e.g. “axis-aligned hyper-rectangles”) or solving mixed integer feasibility programs. In contrast, we only require the ability to solve a classification problem, making it clear how to learn deep constraint networks using our method. This is why we described our approach as being more “likely to scale to realistic problems” in our writeup: we can leverage flexible function approximators and modern deep reinforcement learning algorithms. As Chou et al. wrote in Sec. 8.4 of their paper, “To scale to high-dimensional constraint spaces, we assume a known, relatively simple parameterization, which is in general not the case for real constraints.” We also note that, given the nonstandard constrained policy optimization and constraint learning methods Chou et al. propose using and the fact that they did not release their code, it would be quite an undertaking for us to re-implement their methods on the problems we consider. We are able to prove policy performance / safety guarantees for policies learned via our recovered constraint, while we were unable to find the analogous guarantees in the work of Chou et al. In essence, the theory of Chou et al. 
focuses on constraint recovery which, as we discuss in Sec 3.5, requires a very large number of diverse demonstrations, outside of problems with the relatively simple constraint structures they consider. On a more technical level, we do not require the restrictive Lipschitzness assumptions they make in their analysis section (which again reflects the fact that they are focused on the impossibly high goal of perfect constraint recovery). Also, in theory, Chou et al. require “boundedly suboptimal” demonstrations, while we, as a reviewer points out, do not. In short, we believe that our writeup captures what is essential to the multi-task inverse constraint learning problem, and allows the reader to pick the particular set of solvers that are most effective on the particular instance they are interested in. Furthermore, we are also able to provide strong guarantees on the performance / safety of the policy optimized under the learned constraint, while Chou et al. seem to focus on a goal that is somewhat of a red herring on realistic problems. **Expert Optimality Requirement:** An insight we did not fully appreciate until it was brought to our attention by a reviewer was that nowhere in our formalism do we actually require the expert to be the (soft) optimal policy under the ground truth constraint, as all prior works we know of in the ICL domain [McPherson et al., Chou et al.] do. Of course, if the expert data is extremely suboptimal, it is hard for any method to infer a reasonable constraint. However, in contrast to prior work, our theory does not break down even in this case (as we focus on policy performance / safety rather than perfect constraint recovery) and still provides rigorous guarantees. To empirically validate this point, we redid our single-task experiments using an expert that is far from optimal on the task. Using this data, we are able to learn a constraint that induces a policy that is as safe and significantly more performant than the expert. 
See Figures 7/8 in the PDF for full results. Pdf: /pdf/18ba784b26a69060111b0a3df9ed61e7329e6c8b.pdf
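The global response's claim that the method "only requires the ability to solve a classification problem" can be illustrated with a purely hypothetical sketch (not the paper's training code): a constraint function $c(s) \in [0,1]$ fit by a logistic loss that pushes learner-visited states toward cost 1 and expert-visited states toward cost 0.

```python
# Hypothetical illustration of constraint learning as classification (all
# data and hyperparameters are made up): a logistic model c(s) in [0, 1]
# is trained to separate learner-visited states (cost 1) from
# expert-visited states (cost 0) by plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_constraint(learner_states, expert_states, lr=0.5, steps=500):
    X = np.vstack([learner_states, expert_states])
    y = np.concatenate([np.ones(len(learner_states)),   # learner -> cost 1
                        np.zeros(len(expert_states))])  # expert  -> cost 0
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)   # mean logistic-loss gradient
        b -= lr * float(np.mean(p - y))
    return lambda s: sigmoid(s @ w + b)      # learned constraint c(s)

# Toy 1-D state space: the expert stays below 0, the learner ventures above.
expert = rng.normal(-1.0, 0.3, size=(200, 1))
learner = rng.normal(+1.0, 0.3, size=(200, 1))
c = fit_constraint(learner, expert)
```

Because any function approximator that supports this classification step can be plugged in, the same recipe extends directly to deep constraint networks, which is the flexibility contrasted with Chou et al.'s solver-specific constraint families above.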
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper addresses the challenge of learning safety constraints for agents in various tasks from expert demonstrations, instead of manually specifying them. The authors extend inverse reinforcement learning (IRL) techniques to the space of constraints, aiming to learn constraints that prevent highly rewarding behavior that the expert could have performed but avoided. However, the constraint learning problem is ill-posed and tends to result in overly conservative constraints. To mitigate this issue, the authors utilize diverse demonstrations from multi-task settings to learn a tighter set of constraints. The method is validated through simulation experiments on high-dimensional continuous control tasks. Strengths: - The authors develop a multi-task variant of inverse constraint learning, which consists of multiple policy players aiming to maximize task-specific rewards and a constraint player determining a single constraint that all policy players must follow. - The proposed approach reduces the chances of choosing degenerate solutions. - The inverse constraint learning problem is formulated as a zero-sum game involving a policy player, who seeks to maximize rewards while adhering to potential constraints, and a constraint player, who selects constraints that impose maximum penalties on the learner compared to the expert. - The proposed method demonstrates its effectiveness in various continuous control tasks. When applying restricted function classes, the technique can retrieve ground-truth constraints for certain tasks. Weaknesses: The author argues that previous ICL problems have not been well-formulated, leading to solutions that only allow expert actions. However, it is not clear how the proposed method addresses this issue, aside from utilizing a greater number of demonstrations from a multi-task setting. 
It would be interesting to determine whether the proposed method can work for tasks beyond maze locomotion, which lack complex interactions with the environment. For example, in manipulation tasks, safety constraints are more critical as the robot must interact with and alter the environment. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The author could better address the concerns mentioned in the weaknesses section. It is worth noting that the reviewer is not well-versed in this field and does not specialize in IRL or ICL problems. Thus, the overall rating may be subject to change based on feedback from other reviewers and the author's rebuttal. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Consider removing Section 4.2. A more intuitive explanation of Figure 2 would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough description of our work. Responding to the concerns raised: W1: Intuitively, if we see a more diverse set of behaviors from the expert by using multi-task data, our constraint learner will spuriously forbid fewer states. Importantly, it is not the greater number of demonstrations that matters for dealing with ill-posedness so much as it is the greater diversity of behavior. For example, building on the reviewer’s manipulation example, if we only see data from a single pick-and-place task, single-task ICL could recover a constraint that prevents the learner from placing an object anywhere else on the table. As long as these sorts of “complex interactions with the environment” are encoded in the state of the agent (i.e. we are not in the partially observed setting), we believe our approach would be applicable. Please also see our global response on experimental complexity. While we agree that focusing on manipulation-specific applications would be interesting, our current experiments are equally high-dimensional (e.g. for our ant experiments, the state space is 27-dimensional and the action space is 8-dimensional, compared to a 7-DoF manipulator) and we show results where we are able to recover workspace constraints. More explicitly, given that we can learn a position constraint for a high-dimensional ant agent, we should also be able to learn constraints on the end effector of a manipulator. Similarly, if we are able to handle data that comes from the agent navigating to different parts of the maze, placing a block in different locations / goals should not be that different. Hence we see no conceptual reason why the algorithm would not scale to manipulation. Figure 2: We would be more than happy to add more description of this figure. At a high level, $\rho_{\Pi}$ refers to the space of occupancy measures (i.e. the set of state-action distributions of all policies in our policy class). 
The reason we chose this representation was that a constraint is simply a hyperplane in this space (the red line labeled $\langle \rho_{\pi}, c^* \rangle = 0$). To uniquely determine a line in $\mathbb{R}^d$, we need $d$ points on the line – these are what the green dots / expert policies $\pi_E^1$ and $\pi_E^2$ correspond to. We need two policies as our visitation distributions are two-dimensional (e.g. a two-state MDP). The technical conditions in Lemma 3.2 are the more formal version of the above statements (e.g. relint meaning the points are on the line rather than on the boundary of the space). --- Rebuttal Comment 1.1: Comment: We thank the reviewer for their thoughtful comments. As we get closer to the end of the discussion period, please let us know if we were able to answer all of your questions or if there is anything else we could clarify.
null
null
null
null
null
null
Semantic segmentation of sparse irregular point clouds for leaf/wood discrimination
Accept (poster)
Summary: The authors proposed a method for semantic segmentation applied to unmanned aerial vehicle (UAV) laser scanning (ULS), namely SOUL, to discriminate leaf from wood points. It is based on PointNet++ with an additional sampling scheme and an innovative training loss function to handle the high imbalance between the classes. The SOUL method relies on the coordinates of the points to increase its range of application to other forests and other sensors. It also includes 4 point-wise geometric features computed at 3 scales to characterize each point. The geodesic voxelization decomposition (GVD) is also introduced as a preprocessing method to partition the ULS data while preserving the topology of the point cloud. Experiments have been conducted on a dataset recorded in a French Guiana tropical forest. The proposed method has reached the best results on 3 of the 6 evaluation metrics, including metrics adapted to unbalanced datasets. The approach has also been qualitatively tested on open-source datasets recorded in Australia and Germany, showing a potential generalization to other forests and other LiDAR sensors. Strengths: 1/ The application of semantic segmentation to LiDAR recordings of forests is a high priority for climate change and global warming understanding and mitigation. This original work could have a potentially high impact for reforestation/afforestation monitoring and carbon stock estimation. 2/ The proposed SOUL method is the first to be adapted to the density and imbalance of ULS forest recordings. The proposed GVD preprocessing method is also relevant for other LiDAR sensors and types of forests. 3/ The authors have successfully performed experiments on a dataset recorded in a tropical forest in French Guiana, showing the best results compared to a few competing methods and according to metrics specifically adapted to unbalanced datasets. 
Additional qualitative experiments conducted on unannotated datasets have shown a potential generalization to other forests and other LiDAR sensors. 4/ The paper is well written and motivated with relevant arguments. Weaknesses: 1/ There is a lack of related works and comparisons using other point cloud architectures, which could have led to better performance [1, 2]. The selection of PointNet++ has been motivated by L127 "the lower GPU requirements compared with transformer-based models developed in recent years". The model should be selected as a trade-off between the GPU consumption and the performance of different architectures. These experiments would have been appreciated to support the argument. 2/ An ablation study of the proposed geometric features would have been appreciated. What is the impact of each proposed feature? What is the actual impact of these features compared with using the raw point cloud? Is a model such as PointNet++ capable of estimating these geometric characteristics internally? 3/ Even if the LiDAR point cloud is affected by atmospheric characteristics, it would have been interesting to see the performance of the proposed method using the reflectance as an additional feature per point. 4/ The lack of experiments, in particular with competing methods, ablation studies and standard deviations (see next section), makes the submission questionable as to the significance of the results. [1] Y. Guo et al., Deep Learning for 3D Point Clouds: A Survey. In TPAMI 2020. [2] B. Fei et al., Comprehensive Review of Deep Learning-Based 3D Point Cloud Completion Processing and Analysis. In TITS 2022. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Questions: 1/ Will the dataset be released publicly? Even though the code is available, the only numerical results are presented on the ULS dataset in French Guiana. 
For reproducibility reasons (both training and testing), the final rating of this submission will be conditioned on the release of the dataset, which is not mentioned in the paper. Note that it would be a noticeable contribution since it would be the only annotated ULS dataset for semantic segmentation. 2/ Footnote 3, P7 "MCC is a very good metric for evaluating binary classifier against class imbalance issue." What does "very good" mean in this context? 3/ It is not perfectly clear how the annotations of the French Guiana forest have been built. Do you have any additional comment on this topic? Comments: 1/ There is no reference made in the core text to the appendices, and vice versa, making the bridge difficult for the reader. These documents should not be considered independently; both should refer to each other. 2/ A focus on qualitative and quantitative results at the extremities of the branches would have been interesting, since they concentrate most of the uncertainties. It would show that the model is not learning just a smart threshold on the z axis to discriminate the trunk from the leaves. 3/ Please be more specific than "partitioned" (L227) to qualify a spatial split. This information is only clearly available in the video of the Appendices. 4/ Typo: it would be nice to have the full name of the algorithm appearing in the title of Algorithm 1 in P5. 5/ Table 1: if the top 1 results are in bold per column, all top 1 results should be highlighted, including results from competing methods. 6/ Typo: L290 "Quantitative results demonstrated are shown in Figure 6." These are qualitative results. 7/ Appendices L43 "It is evident that SOUL outperforms all other methods on ULS data and achieves state-of-the-art performance on metrics such as Specificity, Balanced Accuracy and G-mean." It is not straightforward since the standard deviations for the competing methods have not been provided, neither in Table 2 nor in Table 3 (Appendices).
It is not possible to evaluate whether the variance of the other methods includes the average performance of the proposed one. Note that the methodology used to create the intervals in this submission is not usual, but still relevant. A more common practice is to estimate the standard deviation of the performances of several models with different initializations of the network. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limits of the presented method have been addressed by the authors. The potential negative societal impacts have not been mentioned (e.g. military applications, UAV surveillance). The release of the dataset has not been mentioned, which is a major limitation since the proposed method has been designed and validated on it and the only numerical results are based on its annotations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your acknowledgment of our effort and the very helpful comments. Q1: Will the dataset be released publicly? ... We will release the labelled ULS data publicly along with the SOUL code, since this kind of data is still extremely rare indeed. It should make a useful contribution to the community of experts working on point cloud semantic segmentation of natural environments. Q2: For the MCC metric, what does "very good" mean in this context? MCC = (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)) For binary classification, the Matthews correlation coefficient (MCC) uses all four quantities (TP, TN, FP and FN) of the confusion matrix, and produces a high score only if the prediction obtained good results on all four confusion matrix quantities [1, 2], proportionally both to the size of positive elements and the size of negative elements in the dataset (see the Conclusions section in Davide Chicco and Giuseppe Jurman [1]). This is exactly what we need in our context. Q3: It is not perfectly clear how the annotations of the French Guiana forest have been built. Do you have any additional comment on this topic? Annotation was done on a high-resolution Terrestrial Laser Scanning point cloud acquired simultaneously. The labels were subsequently transferred to the less dense ULS point cloud. The procedure used to annotate the TLS point cloud follows the description given in Martin-Ducup et al. [3] under their section entitled "Human assisted generation of control data". Response to weaknesses: (1) There is a lack of related works ... Table 6 in Choe et al. [4] gives such pertinent information. PointNet++ exhibits advantages in terms of memory consumption and training speed, with the results being relatively comparable.
While we would be inclined to conduct further experiments on GPU consumption and the performance of different architectures, resource constraints currently limit our capacity to do so. In essence, a transformer-based architecture performs a lot of dot products to compute a sort of similarity score between the query and key vectors. This inherently leads to slower processing speeds. (2) An ablation study of the proposed geometric features ... We can share the results of an experiment which compares the full model with downgraded model versions where only a single geometric feature was used at a single scale (see Figure 4 and Table 1 in rebuttal_figures.pdf). Those results clearly show the benefits of including precomputed multiple geometric features at multiple scales. We think linearity and verticality contribute more to trunk recognition, whereas PCA1 and Sphericity are more efficient inside the canopy. PointNet++ is able to learn more sophisticated features, but for empirically chosen geometric attributes, it may not be capable of learning identical ones. (3) ...using the reflectance as an additional feature per point. Indeed, reflectance may in some circumstances help discriminate leaf from wood and was used for instance in Wu et al. [5]. However, this makes the model sensor specific as different wavelengths are commonly used in different sensors. The main reason we decided not to use reflectance-derived information is its high variability and lack of specificity. In fact, reflectance cannot discriminate leaf from wood reliably at 905 nm, the wavelength at which our sensor operated (see for instance Figure 2 in Brede et al. [6]). In addition, multiple returns (pulse fragmentation), unknown object orientation, and surface wetness all contribute to adding noise to the apparent reflectance associated with each return (for a discussion of that point please see for instance Vincent et al. [7]). (4) The lack of experiments with competing methods...
While we only compare our method to two others, the latter are commonly recognized as the best alternatives in the context we are dealing with (semantic segmentation of forest point clouds). A comparison extended to other DL methods with different architectures, which have no proven record of performance for the targeted task, would be feasible, but we just did not have the resources to try that. Response to comments: We will correct all the typos accordingly in the revised version, and thank you for the suggestion: we will use this common practice to estimate standard deviations in the future. References [1] D. Chicco and G. Jurman, “The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation,” BMC Genomics, vol. 21, 01 2020. [2] J. Yao and M. Shepperd, “Assessing software defection prediction performance,” in Proceedings of the Evaluation and Assessment in Software Engineering, ACM, Apr 2020. [3] O. Martin-Ducup, I. Mofack, Gislain, D. Wang, P. Raumonen, P. Ploton, B. Sonké, N. Barbier, P. Couteron, and R. Pélissier, “Evaluation of automated pipelines for tree and plot metric estimation from TLS data in tropical forest areas,” Annals of Botany, vol. 128, pp. 753–766, 04 2021. [4] J. Choe, C. Park, F. Rameau, J. Park, and I. S. Kweon, “PointMixer: MLP-Mixer for point cloud understanding,” 2022. [5] B. Wu, G. Zheng, and Y. Chen, “An improved convolution neural network-based model for classifying foliage and woody components from terrestrial laser scanning data,” Remote Sensing, vol. 12, no. 6, 2020. [6] B. Brede, H. M. Bartholomeus, N. Barbier, F. Pimont, G. Vincent, and M. Herold, “Peering through the thicket: Effects of UAV lidar scanner settings and flight planning on canopy volume discovery,” International Journal of Applied Earth Observation and Geoinformation, vol. 114, p. 103056, 2022. [7] G. Vincent, P. Verley, B. Brede, G. Delaitre, E. Maurent, J. Ball, I. Clocher, and N.
Barbier, “Multi-sensor airborne lidar requires intercalibration for consistent estimation of light attenuation and plant area density,” Remote Sensing of Environment, vol. 286, p. 113442, 2023. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their valuable rebuttal. Q2: thanks for the clarification, this should be included in the submission instead of "very good". W2: this ablation study is appreciated and should be integrated into the main submission document. It is interesting to show that PointNet++ is not able to learn these features, and this could open up research in the application of more expressive backbones with comparable computational costs. General comment: since the annotated data will be released, the main submission document should identify it clearly as a contribution. Considering the new experiments and materials provided, I will increase my rating towards acceptance. Note that this paper provides methodological and practical contributions for forest monitoring while including a unique annotated dataset to motivate research in this field. I would like to highlight once again a comment provided in my review in Strengths, 1/: "The application of semantic segmentation to LiDAR recordings of forests is a high priority for climate change and global warming understanding and mitigation. This original work could have a potentially high impact for reforestation/afforestation monitoring and carbon stock estimation." --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback, we are happy that you find our response useful. We will incorporate the content you mentioned in the revised version.
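As an editorial aside for readers, the MCC formula quoted in the rebuttal above can be sketched in a few lines of plain Python. This is an illustrative sketch, not the authors' evaluation code, and the confusion-matrix counts below are hypothetical:

```python
# Sketch of the MCC computation described in the rebuttal (Q2).
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.

    Returns a value in [-1, 1]; it is high only when all four quantities
    (TP, TN, FP, FN) are good, which is why it is robust to class imbalance.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # conventional value when any marginal count is zero
    return (tp * tn - fp * fn) / denom

# A degenerate classifier that predicts every point as "leaf" on an
# imbalanced set (90 leaf, 10 wood) reaches 90% accuracy but MCC = 0.
print(mcc(tp=90, tn=0, fp=10, fn=0))  # 0.0 (a marginal count is zero)
print(mcc(tp=85, tn=8, fp=2, fn=5))
```

This illustrates the rebuttal's point: accuracy rewards the majority-class shortcut, while MCC does not.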
Summary: This paper proposes a dataset and an algorithm for 3D semantic segmentation in forest scenes. From data collection to algorithm design, this paper covers the whole pipeline, oriented toward forest segmentation. In terms of the algorithmic part, the solution itself does not fully satisfy me, but I enjoyed the geometric feature computation (Sec.3.1). Moreover, in the data pre-partitioning part, I understand the intention of such a design and it makes sense to me. Strengths: This paper covers the whole pipeline, from data collection to algorithm inference (semantic segmentation), for tree segmentation. While most of the 3D semantic segmentation methods, PointNet++, Point Transformer (ICCV 21), PointMixer (ECCV 22), PointNeXT (NeurIPS 2022), only deal with 3D semantic segmentation within limited indoor scenes (S3DIS dataset or ScanNet dataset), this paper newly introduces tree/forest segmentation using LiDAR point clouds. Not just the algorithmic part: this paper also covers data collection and data preprocessing, such that it covers the whole pipeline for the forest segmentation task. Weaknesses: Honestly, I could not find a weakness of this paper. Nonetheless, I have a minor question. Can you conduct an ablation study for the rebalance loss? If possible, I want to see the quantitative/qualitative result based on this loss design. Except for that question, I am fully okay with this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please check the weakness section above. Though I did not write much content in this review, I think that this paper is quite solid and reasonable. I really enjoyed reading this submission. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It's fine with me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very positive and supportive review. We truly appreciate your acknowledgment of our efforts. Q1: Can you conduct an ablation study for the rebalance loss? If possible, I want to see the quantitative/qualitative result based on this loss design. Since the class imbalance problem is vital, we replaced the cross entropy with a focal loss early on, and we are certain that the cross entropy version will generally predict all the points as leaf. You can see the qualitative result of the focal loss version by referring to Figure 3 in rebuttal_figures.pdf. The quantitative results can be found in Table 1 of rebuttal_figures.pdf. --- Rebuttal 2: Title: Post-rebuttal evaluation. Comment: I thank the authors for the rebuttals. I also read the whole reviews and the corresponding rebuttals. I still think that this paper has its own contribution. While the reviewers __eT4d__ and __uJjq__ question the (technical) novelty of this paper, I believe that the strength of this paper depends more on its unique problem setup, as the authors claim in the rebuttal: _Our paper is in line with "Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)" of NeurIPS inviting topics, and we believe our work aligns well with the conference's interdisciplinary focus._ In my opinion, I understand the worrying points addressed by the reviewers who are against this paper. I also have experience in this field, so the technical details and the proposed loss would need to be carefully examined via an ablation study using widely-known datasets, such as S3DIS. However, since this paper specifically focuses on 3D semantic segmentation in forest scenes, I believe that such a strict analysis looks redundant for judging the novelty of this paper. __Let me vote for acceptance__. I really enjoyed reading this paper. --- Rebuttal Comment 2.1: Comment: Thank you very much for your additional comment.
We are very grateful for your support and for your thoughtful analysis.
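For readers unfamiliar with the focal loss mentioned in this thread, here is a minimal sketch of the standard binary form (Lin et al., "Focal Loss for Dense Object Detection"). It is an illustration, not necessarily the exact variant used in SOUL, and the gamma/alpha values are illustrative defaults rather than the paper's settings:

```python
# Hedged sketch of a binary focal loss of the kind the rebuttal says
# replaced cross entropy to cope with the leaf/wood imbalance.
import math

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss for one point.

    p: predicted probability of the rare class (wood); y: label in {0, 1}.
    The (1 - p_t)**gamma factor down-weights easy, well-classified points,
    so the abundant leaf class does not dominate training.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, eps))

# An easy leaf point (p near 0) contributes far less than a hard wood point
# misclassified with the same probability.
easy_leaf = binary_focal_loss(p=0.05, y=0)
hard_wood = binary_focal_loss(p=0.05, y=1)
print(easy_leaf < hard_wood)  # True
```

With gamma = 0 and alpha = 0.5 this reduces (up to a constant) to plain cross entropy, which, as the authors note, tends to predict every point as leaf under severe imbalance.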
Summary: They describe an approach for automatically segmenting a LiDAR scan of a forest into wood and leaf points. They train a PointNet++ model and use resampling to address the extreme class imbalance in the data. In comparison to previous methods they achieve a much higher balanced accuracy on their dataset. Strengths: The approach they propose is novel to the best of my knowledge. The presentation is fairly detailed and clear. According to their experimental results (Table 1), they greatly outperform previous methods in terms of specificity and balanced accuracy. Weaknesses: There is probably not enough of a contribution here in terms of machine learning methods for this paper to be appropriate for NeurIPS. The paper is rather narrow in scope and specific to the application of wood-leaf segmentation. Furthermore, they combine existing techniques such as extraction of PCA features, PointNet++, and resampling to handle imbalanced data. As an application study, I would see this type of work as more appropriate for an applied machine learning conference / journal or a forestry / ecology journal. Also, PointNet++ has been used before for wood-leaf segmentation -- this paper was not cited: Xi, Zhouxin, et al. "See the forest and the trees: Effective machine and deep learning algorithms for wood filtering and tree species classification from terrestrial laser scanning." ISPRS Journal of Photogrammetry and Remote Sensing 168 (2020): 1-16. Comments on presentation: * L149 need spaces in the vector notation [0 0 1] (you can do $[0 ~ 0 ~ 1]$ for example) * Eq. 7: if the voxels are adjacent, wouldn't the Manhattan distance be 1 anyway? How is "adjacent" defined? * Figure 3: the numbers in the color bar are too small to read * Eq. 9: the loss doesn't seem to be properly defined. When y_k = 0, the loss term = 0? * L254: "big trunk" -> "large trunks" * Figure 5, 6: what do the colors mean here?
The statement "any other approaches developed for dense point cloud are ineffective" is too broad. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It was not clear to me why they needed to first cluster the tree into large segments; perhaps they could better explain the motivation for this. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Some limitations were discussed but there was not a section specifically labeled "limitations." Ethical implications were not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the very helpful comments, and thank you for bringing this recent work [1] to our attention. We will add the discussion and cite the paper in the final revision. Q1: It was not clear to me why they needed to first cluster the tree into large segments; perhaps they could better explain the motivation for this. The ULS data often covers several hectares, or even hundreds of hectares. The situation in tropical forests is also highly complex, with various types and sizes of trees densely packed together. The significant memory demands make it nearly impossible to process all the data at once, leading us to adopt a spatial split scheme as a necessary compromise. We could select data randomly from the whole scene, but we agree that selecting data randomly can result in a sparse and information-poor sample. Alternatively, we can employ a divide and conquer strategy to handle the chaotic, large-volume, and complex ULS data. That is why we propose GVD, a method that involves breaking down the data into more manageable subsets (refer to Figure 1 in rebuttal_figures.pdf), allowing us to handle the intricacies and extract meaningful insights in a more systematic manner. This approach enables us to retain the information-rich aspects of the data while overcoming the computational challenges associated with the sheer volume of information. In our global response to all the reviewers, we have introduced an alternative spatial split scheme called the "sliding window". This method involves dividing the entire scene into multiple overlapping cuboids. Within each cuboid, we select training, validation, and testing samples. While this is not a strict ablation study of GVD, it allows us to observe the clear impact of border effects (refer to Figure 2 in rebuttal_figures.pdf) and the disruption of spatial information in point cloud data.
Response to weaknesses & comments: (1) Weakness: There is probably not enough of a contribution here in terms of machine learning methods for this paper to be appropriate for NeurIPS. The paper is rather narrow in scope and specific to the application of wood-leaf segmentation. The proposed rebalanced loss addresses the persistent challenge of class imbalance in real-world data, which has practical implications in various domains. By presenting at NeurIPS, we aim to raise visibility and engage with experts in the field. Additionally, our GVD preprocessing method offers a new perspective for handling spatial dependencies. We firmly believe our contributions will benefit the research community, with potential applications beyond our immediate focus. Our paper is in line with "Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences)" of the NeurIPS invited topics, and we believe our work aligns well with the conference's interdisciplinary focus. Plus, as you mentioned, PointNet++ has been employed not only in the study [1] but also by others [2]. However, none of these works have successfully applied this architecture to ULS data. Furthermore, we have undertaken targeted adjustments to the PointNet++ backbone for our specific context (see section B, Architecture of DL model, in our paper). (2) Eq. 7: if the voxels are adjacent, wouldn't the Manhattan distance be 1 anyway? How is "adjacent" defined? Adjacent voxels are defined as voxels that share a common surface; if voxel a and voxel b are adjacent, their Manhattan distance is 1. Now suppose voxel b and voxel c are adjacent, while voxel a and voxel c are not. In this case, the Manhattan distance (or geodesic distance in the paper) between voxel a and voxel c is 2. What we want to calculate is the distance between one of the lowest voxels, which is fixed once it is chosen, and all the other voxels within the same component given by GVD.
The voxel situated at the lowest position along the z-axis is called the "lowest voxel". (3) Eq. 9: the loss doesn't seem to be properly defined. When y_k = 0, the loss term = 0? We will define it in a more proper way in the revised version. (4) Figure 5, 6: what do the colors mean here? The statement "any other approaches developed for dense point cloud are ineffective" is too broad. Red indicates a high likelihood of being a wood point, while blue indicates a high likelihood of being a leaf point. The color gradient can be referred to in Figure 6's color bar. Indeed, we will revise the statement in the final version. (5) Typos: We will correct all the typos accordingly in the revised version. References [1] Z. Xi, C. Hopkinson, S. B. Rood, and D. R. Peddle, “See the forest and the trees: Effective machine and deep learning algorithms for wood filtering and tree species classification from terrestrial laser scanning,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 168, pp. 1–16, 2020. [2] S. Krisanski, M. S. Taskhiri, S. Gonzalez Aracil, D. Herries, A. Muneri, M. B. Gurung, J. Montgomery, and P. Turner, “Forest structural complexity tool—an open source, fully-automated tool for measuring forest point clouds,” Remote Sensing, vol. 13, no. 22, 2021. --- Rebuttal Comment 1.1: Comment: I have read over the other reviews and the authors' responses. I would like to maintain my rating as I still think the level of contribution is below the bar for NeurIPS. --- Reply to Comment 1.1.1: Comment: Thanks for your additional feedback. We are sorry to read that you do not share our enthusiasm for this important topic, and we strongly believe that it would be important to attract more attention from the ML community to climate change related applications. In addition, we feel we have provided responses to your scientific and technical concerns. Are there any remaining issues that justify your decision to reject the paper?
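As an editorial aside, the adjacency and geodesic-distance definition given in the authors' response (2) above amounts to a breadth-first search over face-adjacent occupied voxels. The following is an illustrative sketch with our own variable names, not the SOUL implementation:

```python
# Geodesic voxel distance: adjacent voxels share a face (6-connectivity),
# and the distance from the "lowest voxel" to every other voxel of a
# component is the number of face-adjacency steps (BFS over occupied voxels).
from collections import deque

def geodesic_distances(occupied, source):
    """BFS distances from `source` over a set of occupied (i, j, k) voxels."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for dx, dy, dz in steps:
            w = (v[0] + dx, v[1] + dy, v[2] + dz)
            if w in occupied and w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

# Toy component: an L-shaped run of voxels; (0,0,0) sits lowest on the z-axis.
component = {(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 2)}
lowest = min(component, key=lambda v: v[2])  # the "lowest voxel"
d = geodesic_distances(component, lowest)
print(d[(0, 1, 2)])  # 3: three face-adjacency steps from (0,0,0)
```

Note the distinction raised by the reviewer: the plain Manhattan distance between two voxels ignores occupancy, whereas this geodesic distance only counts paths through occupied voxels of the same component.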
Summary: This paper introduces a neural network model based on the PointNet++ architecture which makes use of point geometry only (excluding any spectral information). To cope with local data sparsity, it proposes a sampling scheme that aims to preserve locally important geometric information. It also proposes a loss function adapted to the severe class imbalance. Experiments show that the proposed model outperforms state-of-the-art methods on UAV point clouds. Strengths: The paper applies the mature PointNet++ to LiDAR tree classification with some variations of the methodology. The results seem good and the method works on the low-resolution UAV LiDAR point clouds. Weaknesses: I am not quite sure about the novelty part as most of the techniques seem to be mature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Probably need to go beyond the PointNet++ method, which is relatively outdated. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The innovation part seems lacking. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and advice. We appreciate the opportunity to address your concerns. Q1: Probably need to go beyond the PointNet++ method, which is relatively outdated. In fact, the recent work by Qian et al. [1] at NeurIPS 2022 demonstrates the PointNet++ backbone's enduring relevance. They achieved substantial performance gains through training strategy adjustments without any architectural changes, which underscores the architecture's continued efficacy. Through a small modification, they reinstated the performance of PointNet++ to a state-of-the-art level. This aligns with our approach, akin to the targeted adjustments in SOUL, suggesting the potential for significant performance improvements with subtle modifications. Another significant advantage worth highlighting is that PointNet++ has lower latency, fewer parameters, and lower memory consumption (see Table 6 in Choe et al. [2]). Latency significantly influences training speed, and due to the use of the rebalanced loss, SOUL demands a large number of training epochs. Hence, PointNet++'s advantage of lower latency proves highly advantageous for our purposes. Indeed, a comparison extended to other DL methods with different architectures would be feasible, but we just did not have the resources to try that. (1) Limitations & Weaknesses: The innovation part seems lacking. We agree that our individual ingredients may be established techniques. However, combining them in a way that yields satisfying results on such a challenging dataset required a certain amount of expertise and design, resulting in a novel proposal. This is also confirmed by the absence of real competitors among methods addressing similar applications and datasets.
As mentioned in the global response and in the response to reviewers yADz and bCLj, the innovative parts of our proposal lie in (1) a whole pipeline for the forest segmentation task with potential community impact, (2) GVD preprocessing for sparse forest data and (3) rebalanced loss function designed for unbalanced ULS forest recordings. We will try to better emphasize these aspects in our revisions. References [1] G. Qian, Y. Li, H. Peng, J. Mai, H. A. A. K. Hammoud, M. Elhoseiny, and B. Ghanem, “Pointnext: Revisiting pointnet++ with improved training and scaling strategies,” 2022. [2] J. Choe, C. Park, F. Rameau, J. Park, and I. S. Kweon, “Pointmixer: Mlp-mixer for point cloud understanding,” 2022.
Rebuttal 1: Rebuttal: We thank the reviewers for their comments and for underlining the existence of new and good features in our work, in particular regarding new methodological contributions, reproducibility and relevance of the experiments, while also suggesting improvements and clarifications. We also note that the importance of forest segmentation, as an essential step towards climate change and global warming understanding and mitigation, has been acknowledged. We summarize below the main points addressed in our response: (1) GVD ablation study: Further elaboration on the benefits of the GVD method is warranted in our paper. The ULS data often covers several hectares, or even hundreds of hectares. The situation in tropical forests is also highly complex, with various types and sizes of trees densely packed together. The significant memory demands make it nearly impossible to process all the data at once, leading us to adopt a spatial split scheme as a necessary compromise. We could select data randomly from the whole scene, but selecting data randomly can result in a sparse and information-poor sample. An alternative is to employ a divide and conquer strategy to handle the chaotic, large-volume, and complex ULS data. That is why we propose GVD, a method that involves breaking down the data into more manageable subsets (refer to Figure 1 in rebuttal_figures.pdf), allowing us to handle the intricacies and extract meaningful insights in a more systematic manner. This approach enables us to retain the information-rich aspects of the data while overcoming the computational challenges associated with the sheer volume of data. Prior to employing the GVD method, we initially adopted a more intuitive approach. The data was partitioned into voxels, each serving as a component for batch preparation through down-sampling.
However, this approach gives rise to border effects, particularly impeding SOUL's focus on the meticulous segmentation of intricate branches and leaves within the tree canopy. The segmentation into cubes led to the emergence of noise point clusters along the voxel edges. ULS has more returns from the tree canopy, so the presence of noise point clusters on voxel edges is more severe there, which imposes bigger obstacles to achieving our goal. We experimented with cuboids, a choice aimed at preserving greater semantic information within each component sample and expanding the spatial range for batch selection. Similar to a "sliding window", we can systematically traverse the entire forest with overlapping coverage in this way. But border effects persisted (see Figure 2 in rebuttal_figures.pdf), prompting the introduction of the GVD method, which led to a substantial improvement. A comparison between the result of the downgraded version using the sliding window as the spatial split scheme and the individual-component point performance of the full version of SOUL is illustrated (see Figure 1 in rebuttal_figures.pdf). Notably, in component N°552, SOUL effectively discriminates trunk points from leaf points that were previously entirely intertwined, surpassing our initial expectations. So, in alignment with the observations of Reviewer bCLj, we hold the conviction that the proposed GVD preprocessing method is relevant beyond the confines of the present study, extending to various LiDAR sensors and diverse forest types. A new video (3D version of Figure 2 in rebuttal_figures.pdf) demonstrating the use of cuboids as a spatial split method has been provided to the AC. (2) Novelty: Our main contribution lies in the complete handling of the full pipeline from data collection to final segmentation. To our knowledge, this is quite unique in forestry applications, where either datasets are limited to much simpler structures, or analysis techniques show limited performance.
In this work, we combine advanced and original deep learning methods and principles to obtain what is a first baseline for the segmentation of ULS LiDAR data. Plus, our model SOUL exclusively depends on coordinates and does not incorporate RGB information, which is different from indoor scene or autopilot contexts. (3) Relevance to NeurIPS: Our work makes use of and proposes new developments and contributions in core deep learning techniques. Although it focuses on point cloud processing, we believe that some of the principles and ideas are transferable to other data structures and problems. As regards the target application, according to this year's call for papers, the conference scope explicitly includes life and natural sciences. In addition, in contrast to some of the reviewers, we believe the topic of our work may be of interest to the NeurIPS readership, especially, as mentioned above, for the potentially high environmental impact of its applications, which is a rising concern for a lot of junior and senior researchers. In addition, to foster collaboration and facilitate meaningful exchange within the research community, we have made the decision to openly share our code and data, hoping to engage with fellow researchers in a more inclusive and productive manner on these important applications. (4) The video provided before: Just in case, a video hyperlink has been embedded within the closing word of the first paragraph in the supplementary. This video shows the qualitative results of the SOUL model and some comparisons between different methods. Pdf: /pdf/66cf9baa0da7308b4e5874b43494def600792ddc.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This submission proposes an end-to-end approach, Semantic segmentation On ULS (SOUL), for leaf-wood semantic segmentation that is based on PointNet++ [8]. Considering the imbalanced class labels in the collected ULS dataset, a rebalanced loss is used. Moreover, a geodesic voxelization decomposition (GVD) method is introduced for data refinement through pre-partition. A ULS dataset with 282 labeled trees is collected for network training and testing. Experiments on the collected dataset demonstrate the effectiveness of the proposed method compared with the chosen baselines. Strengths: 1) A new dataset has been collected. 2) A new approach with comparable results. Weaknesses: The weaknesses of this paper are listed as follows: 1) The writing and the organization of the submission need to be improved. 2) The benefits of using the data pre-partitioning (in Section 3.2) are not clear. It would be better to provide more details and an ablation study w/o the GVD method. 3) The details of the provided baselines are missing. It would be better to consider more baselines in Table 1, e.g., PointNet++ with the proposed sub-modules. 4) The novelty is not sufficient for NeurIPS standards. 5) There are lots of approaches for imbalanced data labels. It would be better to provide more experiments to demonstrate the effectiveness of the rebalanced loss used.
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive comments; we appreciate this opportunity to respond to your comments and address your concerns. Q1: The writing and the organization of the submission need to be improved. We will improve both in the revised version and carefully check and correct all typos. Q2: The benefits of using the data pre-partitioning (in Section 3.2) are not clear. It would be better to provide more details and an ablation study w/o the GVD method. Further elaboration on the benefits of the GVD method is warranted in our paper. ULS data often covers several hectares, or even hundreds of hectares. The situation in tropical forests is also highly complex, with various types and sizes of trees densely packed together. The significant memory demands make it nearly impossible to process all the data at once, leading us to adopt a spatial split scheme as a necessary compromise. We could select data randomly from the whole scene, but this can result in sparse and information-poor samples. An alternative is to employ a divide-and-conquer strategy to handle the chaotic, large-volume, and complex ULS data. That's why we propose GVD, a method that involves breaking down the data into more manageable subsets (refer to Figure 1 in rebuttal_figures.pdf), allowing us to handle the intricacies and extract meaningful insights in a more systematic manner. This approach enables us to retain the information-rich aspects of the data while overcoming the computational challenges associated with the sheer volume of data. In our global response to all the reviewers, we have introduced an alternative spatial split scheme called the "sliding window". This method involves dividing the entire scene into multiple overlapping cuboids. Within each cuboid, we select training, validation, and testing samples.
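The "sliding window" spatial split just described can be illustrated with a minimal sketch; the function and parameter names are hypothetical, only the XY plane is tiled, and this is not the authors' implementation:

```python
import numpy as np

def sliding_window_split(points, window=10.0, overlap=2.0):
    """Split an (N, 3) point cloud into overlapping cuboids in the XY plane.

    `window` is the cuboid edge length and `overlap` is the margin shared by
    neighbouring cuboids (both names are hypothetical illustration choices).
    """
    stride = window - overlap
    mins, maxs = points[:, :2].min(0), points[:, :2].max(0)
    cuboids = []
    x = mins[0]
    while x < maxs[0]:
        y = mins[1]
        while y < maxs[1]:
            # Points falling inside the current cuboid footprint.
            mask = ((points[:, 0] >= x) & (points[:, 0] < x + window) &
                    (points[:, 1] >= y) & (points[:, 1] < y + window))
            if mask.any():
                cuboids.append(points[mask])
            y += stride
        x += stride
    return cuboids
```

Because the stride is smaller than the window, neighbouring cuboids share a band of points, giving the overlapping coverage mentioned above.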
While this is not a strict ablation study of GVD, it allows us to observe the clear impact of border effects and the disruption of spatial information in point cloud data (refer to Figure 2 in rebuttal_figures.pdf). This demonstrates the advantages of the GVD method. Q3: The details of the provided baselines are missing. It would be better to consider more baselines in Table 1, e.g., PointNet++ with the proposed sub-modules. In fact, SOUL is the first approach to tackle semantic segmentation of tropical forest ULS data. SOUL might serve as the first baseline in this field. We have introduced both the best-performing unsupervised method (LeWoS) and a deep learning approach (FSCT) tailored for TLS data. The latter are commonly recognized as the best alternatives in the context we are dealing with (semantic segmentation of forest point clouds). These methods do not perform well on ULS data, mainly because they were not specifically designed to handle raw ULS data and address class imbalance issues. Our qualitative evaluation has demonstrated that our method represents the state of the art in terms of performance. It is worth noting that access to code is limited in this domain, hindering reproducibility and comparative analysis. Moreover, certain methods rely on intensity data, which can vary between devices. Consequently, the generalizability of testing results remains uncertain. That is why we use solely coordinates as input to SOUL. To foster collaboration and facilitate meaningful exchange within the research community, we have decided to openly share our code and data, hoping to engage with fellow researchers in a more inclusive and productive manner. Q4: The novelty is not sufficient for NeurIPS standards. We appreciate the reviewer's feedback and understand their concern regarding the novelty of our work. Nonetheless, we would like to highlight that our research does indeed bring valuable contributions to the field.
The proposed GVD method is also relevant for other LiDAR sensors and types of forests, and the rebalanced loss addresses the challenge of class imbalance in real-world data, which may have practical implications in various domains. As mentioned in the global response and in the responses to reviewers yADz and bCLj, the innovative parts of our proposal lie in (1) a whole pipeline for the forest segmentation task with potential community impact, (2) GVD preprocessing for sparse forest data, and (3) a rebalanced loss function designed for unbalanced ULS forest recordings. We will try to better emphasize these aspects in our revisions. Q5: There are lots of approaches for imbalanced data labels. It would be better to provide more experiments to demonstrate the effectiveness of the rebalanced loss used. We have provided a more comprehensive comparison between the focal loss and the rebalanced loss in our global response to all reviewers and in the final version. Additionally, we will publish our code and data, hoping for further interactions with researchers in the field. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' responses. We acknowledge that the dataset, the main contribution of the submission (and one that should be highlighted in the main paper), might have a potentially high impact on reforestation/afforestation monitoring. However, the technical novelty does, in my view, not meet the bar of NeurIPS, as the method is more likely an approach that borrows existing ML techniques for a specific task. Unless the technical details can be compared with SOTA baselines on other public datasets and with a proper ablation study, I will maintain my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. While we understand your concerns, we would like to emphasize the difference of our application context compared to existing well-known datasets.
Unlike point cloud data of mock-up generated objects, indoor scenes, and autopilot datasets, large-scale raw forest point cloud data exhibits sparsity, irregularity, and heterogeneity. An experienced researcher in this domain cannot ensure accurate classification of wood and leaf points, even manually. Considering the results given by SOUL in comprehensive testing across various analogous datasets: UAV data (ULS), ground-based equipment data (TLS), and backpack data (MLS), we share reviewer yADz's viewpoint regarding the ablation study aspect: "...the technical details and the proposed loss need to be carefully examined as an ablation study using the widely-known datasets, such as S3DIS. However, since this paper typically focuses on the 3D semantic segmentation in forest scenes, I believe that such strict analysis looks redundant to judge the novelty of this paper."
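For readers unfamiliar with the focal loss that the rebuttal above (Q5) compares the rebalanced loss against, here is a minimal NumPy sketch of the standard multi-class focal loss (Lin et al., 2017); the optional per-class weight vector is a generic rebalancing stand-in, not the paper's own rebalanced loss, which is not specified here:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, class_weights=None):
    """Multi-class focal loss.

    probs: (N, C) softmax outputs; labels: (N,) integer class indices.
    gamma down-weights easy examples; class_weights is an optional per-class
    weight vector (a generic rebalancing stand-in for illustration).
    """
    p_t = probs[np.arange(len(labels)), labels]   # probability of true class
    mod = (1.0 - p_t) ** gamma                    # modulating factor
    if class_weights is not None:
        mod = mod * np.asarray(class_weights)[labels]
    return float(np.mean(-mod * np.log(p_t + 1e-12)))
```

With `gamma=0` and no class weights this reduces to ordinary cross-entropy; increasing `gamma` shrinks the contribution of well-classified (majority-class) points.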
Continual Learning with Global Prototypes: Beyond the Scope of Task Supervision
Reject
Summary: This paper studies continual learning in NLP by leveraging global prototypes. The authors attribute catastrophic forgetting to the disruptive updates caused by the misalignment between the knowledge learned from observed tasks and the knowledge required for future tasks. To tackle this problem, the authors propose NeiAttn, which derives global prototypes and learns a proper relationship between the prototypes and data representations for each task. Experiments show that models learning data representations well related to global prototypes induce less catastrophic forgetting, and that NeiAttn outperforms baselines in both the task-incremental and class-incremental learning settings. Strengths: 1. The paper and proposed method are well motivated. The paper reveals a misalignment between the knowledge learned from observed tasks and the knowledge required for future tasks. This is a core issue in continual learning, especially in continual learning with pre-trained models. The learning objective introduced in Lines 154-156 is clear. 2. The paper considers the properties of pre-trained language models (discussed in Lines 175-178). Personally, I think this perspective is important, as pre-training is very common in building machine learning systems, but previous literature in continual learning seldom considers the properties of pre-trained models. 3. The experimental results show the effectiveness of NeiAttn. Weaknesses: 1. The writing of the proposed method is not clear. Based on my understanding, the core of the proposed method lies in Equation 9, though there is much related content in Section 3 and Section 4. It would be better if there were an overview of the proposed method given by pseudo code or an illustration; or it may be clearer to introduce the method in a top-down way. 2. NeiAttn outperforms Prompt Tuning only marginally according to Table 1 and Figure 4.
Based on the motivation and desiderata introduced by this work, I'm wondering if you could apply the training objective in Equations 3, 4 to the Prompt Tuning framework. Will it give you better results and a simpler method than the current NeiAttn? 3. In Appendix F, the authors give the ablation results for the number of neighbor attention layers. The results show that fewer neighbor attention layers give better CL results. If all layers use Neighbor Attention, the results are very bad. Then what's the benefit of using Neighbor Attention for continual learning? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. How does the selection of $\alpha_\tau$ influence the results? The paper only mentions "We set $\alpha_\tau$ to make NeiAttn and PT2 satisfy Eq.(4) while Adapter and FT fail to". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper mentions some limitations in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for the time and effort you have dedicated to evaluating our work. We are glad that you think the perspective in our paper is important. We address your concerns and questions below. **1\. Points of our method** Thanks for the suggestion. We would like to illustrate the flow related to our method: - In Section 3.1, we propose to learn representations related to global prototypes in addition to the conventional classification objective. This leads to our desiderata in the main paper, Section 3.1 (Lines 154-160). These desiderata are the key contribution of our paper, and we derive specific models to realize them. - In Section 3.2, we show how to get global prototypes from models pre-trained by MLM. Based on this, it is possible to adapt a pre-trained model to realize our desiderata instead of explicitly training for Eq. (4), which requires human-annotated rationale tokens (unavailable for most datasets). This is the foundation of Section 4 and explains why we study different adaptation structures for our desiderata. - Section 4 studies different adaptation structures. It is intuitive that not all adaptations can satisfy our desiderata, and there may be multiple adaptations that satisfy them. We study existing adaptation structures and design structures (Eq. (9)) that may better realize our desiderata. We conclude that our contributions are **(1)** proposing a novel set of desiderata guided by global prototypes, which can improve CL performance; **(2)** finding that pre-trained models can provide such global prototypes; and **(3)** introducing an adaptation structure that satisfies our desiderata. Through the experimental study, we find that our proposed NeiAttn and the existing PT2 both support our claim in (1), and both of them perform well in the CL experiments. We will make our illustration clearer in the paper. **2\. Performance of NeiAttn** We would like to clarify a point that may lead to a misunderstanding here.
We do not explicitly train the model using Eq. (4), because the annotated rationales are not available for most of the tasks and data. Instead, we look for adaptation structures to satisfy our desiderata. We evaluate different models’ ability to satisfy our desiderata in Main Paper Figure 3. Results show that besides our proposed NeiAttn, another adaptation model PT2 also satisfies our desiderata. And the CL experiments in Main Paper Table 1 and Figure 4 show these two models both perform well. These results validate our claim that the adaptation model satisfying our desiderata can perform well in continual learning. For the simplicity of these two methods, Prompt-based methods contain fewer parameters but are not easy to train [1,2]. Our NeiAttn requires less time for training and can be further developed for better efficiency with comparable parameters to PT2 (see details below about NeiAttn-LR). **3\. Study on layers of NeiAttn used** **Why we do not use NeiAttn on all layers**: Our neighbor attention is designed under the attention mechanism and can have an over-smoothing issue [3], where all neighbor representations become the same after several layers (Main Paper 242-243). If over-smoothing happens, the neighbor attention block actually serves the same as a regular linear layer, which is more likely to overfit data and deviate from the pre-trained knowledge. We observe that the over-smoothing of NeiAttn can happen after 4-5 NeiAttn layers. So adding neighbor attention on more layers may lead to worse performance in continual learning. However, when learning harder tasks, we may need models with more capacities/parameters, where adding NeiAttn to more layers may help. **Reasons to use NeiAttn**: - Compared to PT2, NeiAttn has better capacity when incorporating replay-based methods (Main Paper Table 1). Also, it requires less training time (\~5 epochs per task) compared to PT2 (\~20 epochs per task). 
- The unstable performance of neighbor attention on all layers can be simply improved with parameter-efficient designs. Specifically, we can use low-rank linear layers instead of the fully connected ones, as used in Adapters and other parameter-efficient models [2]. This gives more robust performance when adding neighbor attention to all layers and has extra efficiency benefits. We show the results of injecting NeiAttn into all layers with and without low-rank layers (denoted as NeiAttn (all) and NeiAttn-LR (all)) below:

| Model | Yahoo-1 Acc | Yahoo-1 Forget | DB Acc | DB Forget | News Series Acc | News Series Forget |
|---|---|---|---|---|---|---|
| NeiAttn-LR (all) | 89.40$\pm$0.76 | 2.54$\pm$0.80 | 99.72$\pm$0.13 | 0.17$\pm$0.15 | 70.63$\pm$7.13 | 10.34$\pm$6.52 |
| NeiAttn (all) | 85.49$\pm$5.27 | 6.2$\pm$5.24 | 95.53$\pm$5.52 | 4.34$\pm$5.51 | 63.98$\pm$12.12 | 15.85$\pm$10.52 |

The low-rank operations may also benefit NeiAttn in the original settings (NeiAttn-LR). Results are shown in Tables 2 and 3 in the rebuttal PDF. **4\. Clarification on $\alpha_\tau$** Since we do not explicitly use Eq. (4) for training, $\alpha_{\tau}$ is only used in the evaluation, as a threshold to decide whether a model satisfies the desiderata of global alignment. According to Figure 3, there is a clear gap in global alignment performance between (PT2, NeiAttn) and Adapters. So we select the threshold that can distinguish such performance differences. We will make this clearer in the paper. **References** [1] He et al., Towards a Unified View of Parameter-Efficient Transfer Learning, ICLR 2022 [2] Hu et al., LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022 [3] Shi et al., Revisiting Over-smoothing in BERT from the Perspective of Graph, ICLR 2022
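The low-rank replacement for the fully connected layers mentioned in the rebuttal, in the spirit of LoRA-style factorizations, can be sketched as follows; this is an illustrative NumPy sketch under assumed shapes, not the authors' NeiAttn-LR implementation:

```python
import numpy as np

class LowRankLinear:
    """Linear map factorised as W = B @ A with rank r << min(d_in, d_out).

    Storing the thin factors A (r, d_in) and B (d_out, r) replaces the
    d_in * d_out parameters of a dense layer with r * (d_in + d_out).
    """
    def __init__(self, d_in, d_out, rank, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((rank, d_in)) / np.sqrt(d_in)
        self.B = rng.standard_normal((d_out, rank)) / np.sqrt(rank)

    def __call__(self, x):
        # Two thin matmuls instead of one dense (d_in, d_out) matmul.
        return x @ self.A.T @ self.B.T

    def n_params(self):
        return self.A.size + self.B.size
```

For BERT-base dimensions (768 in/out) and rank 16, the factorized layer stores 24,576 parameters instead of 589,824, which is where the efficiency benefit comes from.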
Summary: This paper focuses on continual learning in NLP and introduces a regularization-based method to tackle the problem. The main contributions of this work include global alignment, highlighting general-purpose knowledge across tasks, and neighbor attention, which offers a novel parameter-efficient tuning approach. Experimental results in both task-incremental and class-incremental learning scenarios demonstrate the effectiveness of the proposed method. Strengths: The paper is well written and flows smoothly. It addresses two popular continual learning (CL) settings effectively. The proposed intuition aligns well with LLMs and makes logical sense. Weaknesses: 1. The underlying idea is built upon the assumption that LLMs contain general-purpose knowledge and that learning tasks should not deviate too far from it. This idea shares similarities with other regularization methods and may suffer from similar drawbacks, such as a potential negative impact on new task performance and the insufficiency of soft regularization. From this perspective, it is not clear how this paper addresses the problem in a way that other regularization methods cannot. 2. Table 1 lacks several CL NLP baselines (there is an extensive survey in [1]). Only MBPA, published in 2018, is listed for NLP. It is suggested to consider adding more CL NLP baselines, such as [2] (which you have cited) and other recent work mentioned in the survey. [1]: Continual Learning of Natural Language Processing Tasks: A Survey. https://arxiv.org/abs/2211.12701 [2]: Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning, NeurIPS 2021 Technical Quality: 3 good Clarity: 3 good Questions for Authors: I think this paper has good potential. I would be willing to revise my score if they address the concerns I have raised. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weakness 1 Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we sincerely thank you for thoroughly evaluating our work and raising insightful questions. We address your concerns and questions below. **1\. Our regularization vs. others** Compared to other regularization methods, we regularize data representations' deviation from the space of global prototypes, rather than the model parameters' deviation or other representational deviation between tasks. Specifically, - Our model contains a pre-trained LM and trainable adaptation blocks. We do not limit trainable parameter deviations during task learning. This gives flexibility in learning new tasks while still referencing the pre-trained model for general knowledge. - We also do not constrain the deviation of representations between tasks. Our regularization tries to build task-specific representations related to the shared global prototypes. Specifically, representations of different data should be close to the different global prototypes that are related to the corresponding task predictions. This allows representations to be adequately diverse across (seen and unseen) tasks while staying connected via global prototypes. We carefully design the adaptation block to further address the balance between task performance and regularization strength. Specifically, - We design a trainable structure that is easy to adapt for task performance. - We control the mix-up of representations from the trainable branch and the pre-trained branch for the regularization effect. - We increase the model's task learning capacity with extra token information, while restricting that information to the data's neighborhood for the regularization of global alignment. **2\. More NLP baselines** Thanks for the suggestions. We add two NLP baselines, IDBR [1] and CTR [2], in Tables 2 and 3 in the rebuttal PDF.
Results show that our NeiAttn has overall better performance than these baselines: - IDBR works well for task incremental scenarios while performing worse in class incremental scenarios without replay. It also tends to have more representation drift as shown in Figure 1(b) in the rebuttal PDF. - CTR is good at tasks requiring knowledge transfer while tending to have forgetting in tasks with little knowledge to share. And it is not applicable to class incremental scenarios. In comparison, our NeiAttn achieves a better balance among different tasks and scenarios. **References** [1] Huang et al., Continual Learning for Text Classification with Information Disentanglement Based Regularization, NAACL 2021 [2] Ke et al., Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning, NeurIPS 2021
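To make point 1 of the rebuttal above concrete, here is a minimal sketch of regularizing data representations toward a frozen set of global prototypes rather than constraining parameter drift; the cosine-distance-to-nearest-prototype form is an assumption chosen for illustration and is not the paper's exact prototype loss:

```python
import numpy as np

def prototype_alignment_loss(reps, prototypes):
    """Penalise representations that drift away from the frozen prototype space.

    reps: (N, d) data representations; prototypes: (P, d) global prototypes.
    Assumed form: mean cosine distance from each representation to its
    nearest prototype (an illustrative stand-in for the paper's Eq. (2)).
    """
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = reps @ protos.T                 # (N, P) cosine similarities
    return float(np.mean(1.0 - sims.max(axis=1)))
```

Because the prototypes are fixed across tasks, this kind of term ties every task's representations to the same shared anchor points while leaving the trainable parameters themselves unconstrained.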
Summary: The authors address the problem of catastrophic forgetting in continual learning and propose to connect observed and unknown tasks by means of task-specific data representations which can be seen as general-purpose representations useful for a wider range of tasks. To this end, they introduce the notion of global prototypes, which can be pre-learned and reflect data semantics. Based on these, they learn more task-specific representations using an objective that trades off two losses (classification loss and prototype loss). In experimental verifications of their ideas, they consider NLP tasks and find that catastrophic forgetting can successfully be reduced and their neighbor attention model achieves better performance than previous baselines. Strengths: The broad ideas of this paper are easy to follow. The overall idea for how to mitigate catastrophic forgetting appears simple, sound, and solid. The idea of considering transformers with neighbor attention, too, is simple yet compelling. Experimental evaluations appear to be rigorous and comprehensive; results reveal favorable characteristics of the proposed framework. Weaknesses: At points there are concerns regarding technical details. Certain statements appear to be handwavy. Sentences such as "In practice, we add neighbor attention to less than half of the transformer layers and leave the last layer untouched for guidance." or "In continual learning, the optimal layer selections for different tasks may vary." need more elaboration. Throughout, several hyperparameters are introduced (K, \alpha, \beta, …) which apparently have been set in a heuristic manner. The contribution would be stronger if a (mathematical) reason for these choices had been given. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What does it mean to say "… and leave the last layer untouched for guidance"?
Indeed, the concept of "task-specific guidance" seems to be of importance for this work but is not made precise. In lines 149-150, the ansatz for the reference probability distribution is given and motivated via references [8], [16] and [36]. This connection does not become clear enough. Where do the rationale tokens $r_\tau$ come from? The discussion in lines 173-182 does not clarify this point. How is the parameter $K$ in line 234 chosen? Is it set to 20 (as one may guess from line 250)? If so, what motivates this choice? Do other choices lead to different results? A similar question applies to the statement "in this paper, we set \alpha = \beta = 0.2" (line 239). Some motivation for this choice would strengthen the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors openly address limitations of their proposed approach (increased memory requirements and the additional need for hyper-parameter tuning). There are no concerns w.r.t. negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we appreciate the time and effort you have dedicated to evaluating our work. It is inspiring to see you find our method interesting. We address your concerns and questions below. **1\. Elaboration on neighbor attention insertion** Based on the desiderata proposed in Section 3, all our model designs aim to increase the model’s task learning ability without sacrificing the global alignment. Consider each output $o$ of self-attention as: $o = \sum_{i} \lambda_i f(h_i)$ with $\lambda_i \geq 0$ and $\sum_i\lambda_i = 1$, where $h_i$ is the hidden representation at $i$-th position and $f$ is a linear transformation. It searches task-specific information in a convex hull of $f(h_i)$. To improve the model’s expressivity, we add the neighbor attention module to expand such search space with the extra range of neighborhoods. However, NeiAttn faces the over-smoothing issue, where all neighbor representations become the same after several neighbor attention layers [1]. If over-smoothing happens, the neighbor attention block actually serves the same as a regular linear layer on the neighbor representation, which is more likely to suffer from overfitting and deviate from the pre-trained knowledge. We observe that the over-smoothing can happen after 4-5 neighbor attention layers, and thus inject neighbor attention only to half of the attention layers. **2\. Optimal NeiAttn layer selection** In continual learning, tasks may have different difficulties and require different model capacities. Simple tasks may need very light adaptation. In this case, adding extra adaptation blocks to more transformer layers may increase the risk of deviating from the pre-trained knowledge and losing global alignment. On the other hand, hard tasks may require stronger adaptations, and injecting adaptation blocks to more transformer layers may give better performance. We will add more elaborations in the paper. **3\. 
Leave the last layer untouched for guidance** First, we would like to summarize the overall structure of our model: we fix the pre-trained transformer and add trainable adaptation blocks on selected transformer layers. After that, a classifier is added for label prediction. Given this overall structure, we do not add the adaptation block to the last transformer layer because its updating (the parameters' gradients) would mostly be influenced by the following classifier, which is not pre-trained and is purely learned from tasks. Such updating lacks the guidance of the pre-trained knowledge and is less meaningful to our desiderata. **4\. The reference probability and rationale tokens** Sorry for the confusion. In Eq. (2), the prototype loss requires representing the data's task-specific information in relation to global prototypes, i.e., at the token level. The original numeric label $y_\tau$ is not related to global prototypes. In this case, we should find specific tokens that contain task information for each data point. We may have multiple choices to represent such token-level task information. However, based on the paper [2], which suggests preserving the data's holistic information for future task learning, we select tokens that contain information from the original data and also relate to task predictions. Rationale tokens are good sources of such information. They are extracted from the data tokens, but are related to task predictions. An example of rationales is shown in Supplementary Material D.2, Table 1. The rationale tokens can be obtained from human annotators, which are however not available for most datasets. In this paper, we use adaptation models to implicitly achieve the effect of global alignment, instead of explicitly training for Eq. (4). In this case, rationale tokens are not required during our training. But we do use a dataset with annotated rationales for the evaluation of the global alignment ability (Main Paper Figure 3). **5\.
Hyperparameter selection** We set the hyperparameters ($K$, $\alpha$, $\beta$, …) to control the initial range of the neighborhood. As described above, we want to increase the model's expressivity by expanding its search space to the neighborhood of the data. The neighborhood cannot be too large, otherwise the learned information may deviate far from the data; nor can it be too small, which would cause a loss of expressivity. - "*How is parameter $K$ in line 234 chosen? What motivates this choice? Do other choices lead to different results?*" $K$ is set to 20 for the experiments in the main paper. $K$ decides the range of neighbor selections. When retrieving neighbors of tokens, we find that the nearest neighbors usually include many variations of the original token (e.g. ‘prepares’ and ‘prepared’ are nearest neighbors of ‘prepare’), which do not contain much extra information. To make the extracted neighbors contain some extra information, we pick 5 neighbors out of the $K=20$ nearest neighbors for neighbor attention. We have provided an example of neighbors in Supplementary Material D.3, Table 2. - "*Motivation of $\alpha$ and $\beta$*". $\alpha$ and $\beta$ control the initial range of neighbor representations. As analyzed before, it should be on an adequate scale. These hyperparameters are selected mainly by experiments. We provide an ablation on the influence of these hyperparameters in Supplementary Material F.1. **References** [1] Shi et al., Revisiting Over-smoothing in BERT from the Perspective of Graph, ICLR 2022 [2] Guo et al., Online continual learning through mutual information maximization, ICML 2022 --- Rebuttal Comment 1.1: Comment: Thank you for your clarification.
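The neighbor-retrieval step described in the rebuttal above (keep 5 of the $K=20$ nearest neighbors) might look like the following sketch; the uniform-random subsampling rule is a hypothetical stand-in, since the exact selection rule is not given here:

```python
import numpy as np

def retrieve_neighbors(query, embeddings, k=20, keep=5, seed=0):
    """Retrieve the k nearest token embeddings by cosine similarity, then
    keep a subset for neighbor attention.

    The subsampling rule (uniform random) is an illustrative assumption;
    the rebuttal only states that 5 of the K=20 nearest neighbors are kept
    so that trivial inflections of the same word do not dominate.
    """
    sims = embeddings @ query / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query) + 1e-12)
    top_k = np.argsort(-sims)[:k]          # indices of the k nearest tokens
    rng = np.random.default_rng(seed)
    return rng.choice(top_k, size=keep, replace=False)
```

Any deterministic diversity criterion (e.g. skipping near-duplicate embeddings) could replace the random subsample without changing the overall flow.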
Summary: This paper proposes a continual learning method for both task- and class-incremental learning by incorporating global prototypes. These global prototypes are derived from a pre-trained masked language model and are used to make connections with task-specific prototypes. By maintaining these connections, the proposed method prevents task knowledge from being forgotten. The authors additionally introduce a trainable module called NeiAttn for multi-head attention. The proposed method is evaluated with several existing CL methods and its performance is compared with different adaptation models on BERT-base. Strengths: 1. The overall approach of using global knowledge and task-specific knowledge to prevent forgetting is interesting. 2. The proposed method is applicable to existing CL methods. Weaknesses: 1. I found it difficult to comprehend how and why Eq. 2 establishes a connection between global knowledge and task-specific knowledge, leading to the prevention of forgetting. 2. The experimental results do not demonstrate a clear advantage of the proposed method over existing methods, for the following reasons: i) The baselines are too old to tell how much improvement it would bring when applied to more recent methods. ii) The experiments only demonstrate improvements for CL methods whose original models do not leverage adapters or prompts. However, there are CL methods that already incorporate prompts or adapters [1, 2]. To provide a broader perspective, the authors should compare their method with these approaches. iii) According to Fig 3 and Tab 1, PT2 appears to be comparable in performance, while requiring fewer resources as indicated in Tab 2. 3. This method is only applicable to NLP tasks, given that the global prototypes are obtained from language models. Most continual learning methods are designed to be general and not limited to specific task types such as vision or text.
Therefore, a discussion on how to apply this method to non-language tasks should be included. Misc. Typos: Rationle in line 251. preservingholistic in line 81. [1] Wang et al. Learning to Prompt for Continual Learning. CVPR, 2022 [2] Kim et al. A multi-head model for continual learning via out-of-distribution replay. CoLLAs, 2022 Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. NeiReg is proposed, but its performance is not compared in Tab 1 and Fig 4. 2. Does the network size expand for each task in the learning process? If it does, how much does it increase at each task and what’s the final size of the model after learning all the tasks? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Refer to Weaknesses and Questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, thank you for thoroughly evaluating our work and raising insightful questions. We address your concerns and questions below. **1\. The connection between global knowledge and task-specific knowledge in Eq. (2)** Sorry for the confusion. In Eq. (2), the global knowledge is provided by proto[$v’$] in the denominator, which is the global prototype of the unit $v’$ (i.e. token) shared across all tasks. proto[$v’$] is pre-learned by masked language modeling (Eq. (5)) to reflect task-general connections among all units $v' \in V$. The task-specific knowledge is provided by $p(v|x_\tau, y_\tau)$, $v \in V$. Specifically for NLP, we set $p(v|x_\tau, y_\tau)>0$ if $v$ is one of the rationale tokens of data $x_\tau$. The rationale tokens are tokens in the data that are essential for task predictions (main paper Lines 147-150). Generally speaking, instead of using the numeric class label $y_\tau$, we describe the task-specific knowledge over global units $v$ (i.e., at the token level), and we encourage the learned representations to be related to the prototypes of those global units. Since the global prototypes (proto[$v’$]) are pre-trained and shared across all tasks, this builds the connection between global knowledge and task-specific knowledge. An illustrative example is provided in Supplementary Material D.1, Figure 6. **Why it can prevent forgetting**: The original classification loss in Eq. (1) learns knowledge only from task supervision. Specifically, it is guided by class prototypes ${w}^c_\gamma$, which are learned separately for each task without considering connections to future tasks. Thus, the knowledge learned with Eq. (1) may not generalize to future tasks. After updating the model for a new task, the new knowledge can interfere with previous knowledge, causing previous task representations to change abruptly (main paper Figure 1(a)). In Eq. (2), we guide each task’s learning with global prototypes.
By doing so, representations learned from seen and unseen tasks can align with each other. This may reduce abrupt representation change and thus reduce forgetting. **2\. SOTA Baselines** i) “*The baselines are too old*”. Our method does not use experience replay, information from previous tasks, or dynamic structures. We have already compared with recent adaptation baselines (PT2, 2021) and replay-based baselines (ER-ACE, 2022 [6]). Results have shown the improvement of our method over those baselines. We also add more baselines in Tables 2 and 3 of the rebuttal PDF. ii) "*CL methods that already incorporate prompts or adapters*". We additionally compare L2P [1] and CTR [3] as suggested. Since the suggested model in [2] requires replay and OOD data detection besides the adapter model, we compare to another adapter-based CL model, CTR, for a fair comparison. Results are shown in Tables 2 and 3 of the rebuttal PDF. With prompts injected only at the inputs, L2P has insufficient capacity on single tasks on BERT-base, which may lead to inferior performance in task-incremental learning. CTR, which is designed especially for knowledge transfer, performs well on News Series while tending to forget more on tasks requiring little knowledge transfer. NeiAttn achieves overall better performance than the baselines. **3\. Parameter efficiency and computing resources** We would like to point out that fewer parameters in PT2 do not mean fewer computations. Prompt-based methods are hard to train [7]. PT2 requires 20 epochs for each single task to converge, while NeiAttn only needs 5 epochs. With additional prompts (~50), PT2 also requires additional computing memory when calculating self-attention. Since our paper is not about parameter efficiency, we do not design the model with reduced parameters.
However, we can achieve parameter efficiency by simply using low-rank linear layers instead of fully connected ones, as is generally done in parameter-efficiency methods like Adapters. We show the results of low-rank NeiAttn (NeiAttn-LR) in Tables 2 and 3 of the rebuttal PDF. NeiAttn-LR has a comparable number of parameters to PT2, while preserving the strong CL performance of NeiAttn. Last but not least, we want to point out that PT2 also satisfies our desiderata of task performance and global alignment. Its success in CL also supports the key claim of this paper. **4\. Broader application of the proposed approach** Thanks for the inspiring question. We believe our main contributions on global alignment and the utilization of adaptation models generalize to CV tasks. To apply them to CV tasks, the keys are to 1. obtain a pre-trained model with global prototypes, which can be one trained by generative self-supervised learning as in [4]; and 2. find adaptations at the proper level for global alignment, as in [5]. In addition, we believe our method's application to NLP can strengthen CL research in the NLP domain. Nowadays, large language models and their adaptation models are shown to have good properties in single-task learning. However, for CL, existing works built on pre-trained models/LLMs mainly take advantage of the parameter efficiency of adaptation models, while we show that the global knowledge contained in LLMs can also benefit CL. This may shed light on more CL methods using pre-trained models. We will add a discussion to the paper. **References** [1] Wang et al., Learning to Prompt for Continual Learning,
CVPR 2022 [2] Kim et al., A multi-head model for continual learning via out-of-distribution replay, CoLLAs 2022 [3] Ke et al., Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning, NeurIPS 2021 [4] He et al., Masked Autoencoders Are Scalable Vision Learners, CVPR 2022 [5] Zhang et al., Adding Conditional Control to Text-to-Image Diffusion Models, 2023 [6] Caccia et al., New Insights on Reducing Abrupt Representation Change in Online Continual Learning, ICLR 2022 [7] He et al., Towards a Unified View of Parameter-Efficient Transfer Learning, ICLR 2022 --- Rebuttal Comment 1.1: Title: Response to the author comments Comment: Thank you for the efforts in addressing my concerns. After reading other reviews, I've decided to raise my score from 4 to 5.
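The parameter-efficiency argument in the rebuttal above (replacing fully connected layers with low-rank ones, as in NeiAttn-LR) can be made concrete with a back-of-the-envelope sketch. This is our own illustration, not the authors' implementation; the rank value of 16 is an assumption, and only the hidden size 768 (BERT-base) comes from the discussion.

```python
# Sketch (assumption, not the authors' code): factorizing a dense d x d
# linear map W into two rank-r factors A (d x r) and B (r x d) cuts the
# parameter count from d*d to r*(d + d).

def linear_params(d_in: int, d_out: int) -> int:
    return d_in * d_out

def low_rank_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)

d = 768                            # BERT-base hidden size
full = linear_params(d, d)         # dense layer
lr = low_rank_params(d, d, r=16)   # rank-16 factorization (assumed rank)
print(full, lr, full // lr)        # the factorized layer is far smaller
```

At rank 16 this is roughly a 24x parameter reduction per layer, which is how adapter-style methods keep the trainable footprint comparable to prompt tuning.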
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their insightful comments and suggestions. We have added new experimental results in the Rebuttal PDF with **(1)** CL baselines using prompt or adapter structures and **(2)** more CL in NLP baselines. Results show that our proposed methods can still achieve advanced results in the overall CL experiments. For specific questions, we respond to each reviewer respectively. We list the materials used in our rebuttals below: - Main Paper: the main paper submitted. - Supplementary Material: the supplementary PDF submitted with the main paper, including examples and ablations. - Rebuttal PDF: the one-page PDF attached in this general response, including our additional experiments. Thank you for taking the time to consider our rebuttals. Pdf: /pdf/5f4dc035013f19d0d15bd4791eb2ddcf4b8ee53a.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper addresses the continual learning problem by leveraging a concept called global prototypes, which are invariant features that are not altered during task-specific continual learning. The training is thus augmented with an additional loss aligning the data features to the prototypes. The paper then argues that adapter-like parameter-efficient fine-tuning (PeFT) methods are prototype-aligned. Experiment-wise, the paper conducted studies on different PeFT fine-tuning methods for continual learning and showed that their proposed light adaptation NeiAttn and an existing PeFT method, PT2, are significantly better than others. Strengths: The motivation of the paper is clear and the experiments conducted are extensive. The overall presentation is clear. Figures are nice and illustrative. Weaknesses: 1. The paper is dense and, while the authors have tried to illustrate it, I still find some parts less convincing. Mostly, I feel like there is a simple argument to summarize Section 3: continual learning should not deviate much from previously learned task parameters. This seems to justify all the later experiments and analysis. The current Section 3 draws an abstract component of global prototypes, but only realizes it as fixed model parameters, which I feel is redundant. Maybe I missed a point in the paper though, happy to be corrected. 2. There is a lack of measurement of how close an adapter module is to the original model (closer to the prototype); the paper arrived at the conclusion that PT2 and their NeiAttn are better, but there doesn't seem to be a strong reason behind it. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. This might be my misunderstanding of the paper, but I absolutely can't parse lines 300-301: what do you mean by setting $a_\tau$? Was equation (4) actually used in training, or, from my understanding, is the global prototype a concept that is desired but not explicitly used in training?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: some limitations were discussed about the requirement of hyperparameters for their proposed NeiAttn. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer, we thank you for the time and effort you have dedicated to evaluating our work. We address your concerns and questions below. **1\. Clarification on Section 3** We would like to clarify the misunderstanding in the statement “there is a simple argument to summarize Section 3: continual learning should not deviate much from previously learned task parameters”. We highlight our important designs below: - **The use of the fixed model.** We use the fixed model as a common basis to align with global prototypes, rather than forcing model parameters not to deviate from it. Specifically, the fixed model only contains task-general knowledge, and we have to sufficiently adapt that knowledge to specific tasks by learning additional adaptation blocks (e.g. NeiAttn). The learning of the adaptation blocks references the task-general knowledge (the fixed model), but we do not limit the deviation of the parameters in the adaptation blocks. This means those parameters can still receive (large) updates to learn a task if necessary. - **Not all models can be our fixed model.** Importantly, not every model can serve as our fixed model with general knowledge aligned with global prototypes. As analyzed in Section 3.2, we can directly use the pre-trained LM because it is pre-trained via masked language modeling (Eq. (5)), which learns model parameters with the global prototypes desired in Eq. (3) (i.e. proto[$v$] = $w_\delta^v$). If a fixed model does not contain such global knowledge, adapting it to tasks may not give the effect of global alignment. For example, if we train a model for Task A from scratch, it only contains knowledge of Task A; adapting it to an irrelevant Task B may then not align the knowledge of Task A and Task B well. We also conduct additional experiments to verify that our method is not equivalent to ‘not deviating from previously learned task parameters’.
We test EWC [1], which purely constrains the parameter deviation during task learning, and empirically find that it does not outperform our approach in the CL experiments. The EWC results are available in Table 2 of the rebuttal PDF. **2\. Measurement of closeness to the global prototype** We clarify that we do not encourage the parameters of the adaptation module to be close to those of the fixed (‘original’) model. Instead, we expect the adapted (i.e. final) models to produce **representations** within the space of global prototypes. Specifically, global prototypes are representations of some base units (e.g. tokens in NLP), and representations of different data should be close to the different global prototypes related to the corresponding task predictions. An example is shown in Supplementary Material Figure 6. Based on that, we measure a model’s ability to learn representations related to global prototypes by feeding the learned representations to the pre-trained decoder ($w_\delta$ in Eq. (5); recall that proto[$v$] = $w_\delta^v$) and predicting the top-20 tokens from the decoder. We then compute the ratio of rationale tokens (i.e. tokens carrying the task-specific information of the data) in the top-20 predictions. This measures how close the predicted $\hat{p}(v|x_\tau, y_\tau)$ is to the ground truth $p(v|x_\tau, y_\tau)$ in Eq. (2), based on global prototypes. We evaluate this on the E-SNLI dataset, where the data's rationale tokens are human-annotated (main paper lines 247-252). The results are shown in the main paper Figure 3. Compared to PT2 and NeiAttn, Adapters show overall lower scores for global alignment (bottom right), which is why we conclude that PT2 and NeiAttn have a better ability for global alignment. An example of predicted rationales is shown in the original Supplementary Material D.2. **3\. Question on $a_\tau$** Sorry for the confusion. The value of $a_\tau$ decides how strong the desideratum of global alignment is. We do not use $a_\tau$ in our training.
The value of $a_\tau$ is used during evaluation to distinguish different models’ global alignment ability. We will illustrate this more clearly in the paper. **4\. Question on Eq. (4)** Eq. (4) is **not** explicitly used in training because annotated rationale tokens (for $p(v|x_\tau, y_\tau)$ in Eq. (2)) are not available for most datasets (main paper Lines 173-177). **Reference** [1] Kirkpatrick et al., Overcoming catastrophic forgetting in neural networks, PNAS 2016
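The global-alignment measurement described in point 2 of the rebuttal above (take the decoder's top-20 predicted tokens, then compute the ratio of annotated rationale tokens among them) can be sketched as follows. This is our own toy illustration: the scores, vocabulary size, and token ids are made up.

```python
# Sketch of the evaluation metric (our toy illustration, not the authors'
# code): score global alignment as the fraction of the decoder's top-k
# predicted tokens that are annotated rationale tokens.
import numpy as np

def rationale_ratio(decoder_scores, rationale_ids, k=20):
    """decoder_scores: scores over the vocabulary for one example."""
    topk = np.argsort(decoder_scores)[::-1][:k]            # top-k token ids
    return len(set(topk.tolist()) & set(rationale_ids)) / k

scores = np.arange(100, dtype=float)   # toy scores: token 99 ranked highest
rationales = [99, 85, 10]              # toy annotated rationale token ids
print(rationale_ratio(scores, rationales))  # 2 of the top 20 are rationales
```

Averaging this ratio over a dataset with human-annotated rationales (such as E-SNLI) gives a single alignment score per model, which is how the curves in the main paper's Figure 3 can be compared.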
Real-World Image Super-Resolution as Multi-Task Learning
Accept (poster)
Summary: This paper revisits the real-world super-resolution task from the perspective of multi-task learning, considering each type of degradation as a separate task. However, there are countless types of degradation in the real world, which often results in severe task competition in previous methods. The authors propose a task grouping strategy to address this problem. Furthermore, they present a multi-task learning framework suitable for real-SR tasks, guiding the target SR model with the task-group label probability. Experimental results showcase an advanced performance improvement by the proposed TGSR across a wide range of degradation types. Strengths: This paper addresses real-world super-resolution tasks from the multi-task learning perspective, which I think is novel to this field. The paper effectively dissects the issues present in previous methods from a multi-task learning perspective and provides in-depth insight. I think that the degradation task grouping strategy proposed in the paper may shed new light on the analysis and study of blind SR tasks. Weaknesses: A detailed selection strategy for the different degradation tasks in Section 3.1 and Figure 1 is recommended. Hyperparameters in the proposed task grouping strategy, such as the number of task groups and the division threshold, are given and fixed in this paper, which is not well motivated to me. I think these could have a potential impact on the proposed method, but detailed analysis or ablation is missing in this paper. The authors should provide more comprehensive analyses to address these concerns. In the degradation type grouping strategy, the threshold for each group division is determined manually, and the paper does not include an ablation on the number of groups or the threshold, which could lead to potential imprecision in the division strategy.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: My concerns regarding the evaluation data DIV2K5G and its close connection with the proposed TGSR model are valid. The potential for an inherent bias exists since the dataset was constructed using the proposed task grouping algorithm, which is closely tied with the iteratively trained and fine-tuned SR model. It would indeed be beneficial for the authors to evaluate the TGSR model on the standard benchmark dataset, which would provide a more independent measure of the effectiveness of the multi-task learning-based task grouping strategy. Additionally, performing comparisons on the RealSR dataset or a randomly generated benchmark by BSRGAN or RealESRGAN can be beneficial. These additional evaluations can provide further evidence to the superiority of the proposed TGSR. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer o9KK, We appreciate your thoughtful comments. Please find below our detailed response addressing your concerns. ### Q1. A detailed selection strategy for the different degradation tasks in Section 3.1 and Figure 1 is recommended. Please note that we've stated in **Line 115** that the 100 degradation tasks (used for the illustration in Figure 1) were randomly sampled from the degradation model of RealESRGAN. In our main experiment, we also randomly generated 4000 degradation tasks using the RealESRGAN degradation model. ### Q2. Hyperparameters in the proposed task grouping strategy, such as the number of task groups and the division threshold, are given and fixed in this paper, which is not well motivated to me. I think these could have a potential impact on the proposed method, but detailed analysis or ablation is missing in this paper. The authors should provide more comprehensive analyses to address these concerns. Please note that in **Section 7.3.4** of the supplementary materials, we've provided additional results and a discussion about the effect of the number of task groups. We find that as more and more groups are added, the rate of performance improvement for each new group becomes smaller. ### Q3. In the degradation type grouping strategy, the threshold for each group division is determined manually, and the paper does not include an ablation on the number of groups or the threshold, which could lead to potential imprecision in the division strategy. Please note that in **Section 7.3.5** of the supplementary materials, we've provided an ablation study of the division threshold. We find that as the division threshold decreases, the performance gain becomes smaller. ### Q4. Performing comparisons on the RealSR dataset or a randomly generated benchmark by BSRGAN or RealESRGAN can be beneficial. These additional evaluations can provide further evidence of the superiority of the proposed TGSR. Thank you for the suggestion.
Following the approach used by BSRGAN, we generate a new dataset DIV2K\_random by randomly adding degradations sampled from the degradation model of RealESRGAN to the DIV2K\_val dataset. Additionally, we conduct a comparison on the real benchmark AIM2019-val, the test set used for the real SR track in the AIM 2019 Challenge [1]. On both datasets, our TGSR outperforms state-of-the-art methods consistently and significantly. We will include the results in the final version. | | | ESRGAN|BSRGAN|RealESRGAN|SwinIR|DASR|TGSR| | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | | DIV2K\_random| PSNR |20.63 |23.76|23.54|23.13|23.52|**23.84**| | DIV2K\_random| LPIPS |0.6345 | 0.4622|0.4423|0.4432|0.4832|**0.4368**| | AIM2019-val | PSNR | 23.16 | 24.20 | 23.89 | 23.89 | 23.76 | **24.27** | | AIM2019-val | LPIPS | 0.5500 | 0.4000 | 0.3960 | **0.3870** | 0.4210 | **0.3899** | [1] AIM 2019 challenge on real-world image super-resolution: Methods and results. --- Rebuttal Comment 1.1: Comment: I have carefully read author's further feedback for all reviewers, and the additional information provided has address my concerns. However, echoing other reviewers' sentiments, the current manuscript falls short in terms of in-depth analysis and sufficient verification. There's definite potential for improvement in forthcoming revisions, especially with regards to intuitive observations, underlying motivations, and comprehensive comparative and ablation studies. A more rigorous refinement is pivotal for the manuscript to resonate with the stringent standards of NeurIPS. After careful consideration, I've opted to maintain my initial evaluation score. --- Reply to Comment 1.1.1: Comment: We appreciate your response and will include more details and your suggestion in our camera-ready version.
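The threshold-based identification of unsatisfactory tasks discussed in the rebuttal above (Q2/Q3: a task is unsatisfactory when a single-task model would improve PSNR over the jointly trained real-SR model by more than a division threshold) can be sketched roughly as follows. This is our own illustration: the gains and the 0.3 dB threshold are made-up numbers, and the paper's gradient-based indicator and iterative finetune-and-regroup loop are omitted.

```python
# Rough sketch (our illustration, not the authors' code) of flagging
# "unsatisfactory" degradation tasks by thresholding the estimated
# single-task PSNR gain over the jointly trained real-SR model.

def unsatisfactory_tasks(psnr_gain, threshold=0.3):
    """psnr_gain maps task id -> PSNR(single-task model) - PSNR(real-SR model)."""
    return sorted(t for t, g in psnr_gain.items() if g > threshold)

gains = {0: 0.05, 1: 0.8, 2: 0.31, 3: 0.1, 4: 1.2}   # toy per-task gains (dB)
print(unsatisfactory_tasks(gains))  # tasks whose gain exceeds the threshold
```

Raising the threshold flags fewer tasks (and yields fewer groups), which matches the rebuttal's observation that the performance gain per additional group shrinks as the threshold decreases.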
Summary: The authors rethink the real-world super-resolution problem from the perspective of multi-task learning. And point out the primary challenge: task competition problem. To address this issue, they propose a task grouping method to identify unsatisfactory tasks and introduce TGSR to handle them separately, thereby eliminating task competition. Extensive experiments demonstrate the effectiveness of the proposed method. Strengths: 1. Taking real-SR as a multi-task learning problem offers a novel perspective that can yield further insights and considerations. For instance, how to effectively sample degradation in a large degradation space? What relationships exist between different tasks? 2. The authors present a clear motivation in the form of task competition, as current real-SR networks are unable to perfectly handle all cases of degradation. 3. The paper includes extensive ablation studies. Interestingly, the authors found that even when fine-tuning directly on a single degradation task, many cases still do not perform as well as a range of degradation. This suggests that traditional blind super-resolution networks based on kernel estimation may no longer be applicable within large degradation spaces. This is a highly insightful finding. Weaknesses: 1. The authors should provide the calculation time of the performance indicator, although the performance indicator is obviously faster than directly fine-tuning a single-task network. 2. The groups 1-4 in Figure 4 of the main text seem to be inconsistent with the samples in the supp file, and Table 1 also shows that group 1 should be a low-quality difficult case (with the lowest PSNR value). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See above weakness, authors need to address them well. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes. Besides, since this work still uses synthetic data, it would be better to consider low-quality images from real scenes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer yhLt, We'd like to thank you for your positive feedback and address the concerns raised in your comments. ### Q1. The authors should provide the calculation time of the performance indicator, although the performance indicator is obviously faster than directly fine-tuning a single-task network. The computational time of the performance indicator for one degradation task is around 100 seconds. In contrast, it takes about 2 hours to fine-tune a single-task network. Hence, our approach offers a nearly 70-fold speed-up compared to directly fine-tuning single-task networks to identify unsatisfactory tasks. We will include this comparison and discussion in the final version. ### Q2. The groups 1-4 in Figure 4 of the main text seem to be inconsistent with the samples in the supp file, and Table 1 also shows that group 1 should be a low-quality difficult case (with the lowest PSNR value). Thank you for your careful reading. The order of groups 1-4 in Figure 4 has been incorrectly reversed, but the order in the supplement is correct. We will correct this error in the final version. --- Rebuttal Comment 1.1: Comment: This response has resolved my concerns. From my perspective, this paper highlights the task competition in the real-SR problem and provides a reasonable solution. Overall, no prior work has explored real SR from a multi-task view, so I think this paper is insightful and can inspire further research due to this theoretical contribution. I have also read other reviews and the corresponding feedback, and although there may exist some small problems in this paper, the overall insight of this paper is significant, and the feedback also gives me some inspiration. I suggest the authors add them in the revision. Since all of my concerns have been solved and I think other reviews would not affect its contribution, I would like to raise my score.
--- Reply to Comment 1.1.1: Comment: We appreciate your feedback and suggestions. We will incorporate them into the final version. We also appreciate your constructive discussion with Reviewer 4Sx7.
Summary: This paper aims at the task conflict issue of real-world image super-resolution (SR) with multiple degradation tasks, and proposes a task grouping approach that groups similar tasks together to mitigate task competition. In addition, this paper designs a real-SR network called TGSR (task grouping-based real-SR network), which leverages the identified task groups to train a task group classifier and uses the predicted information to generate modulation signals for image restoration. Strengths: 1. A new look at real SR from a multi-task learning perspective. 2. A performance indicator based on gradient updates to efficiently identify the degradation tasks. Weaknesses: 1. The authors compare the PSNR distance between the real-SR model trained on the entire degradation space introduced by Real-ESRGAN [34] and 100 types of single-task models fine-tuned on specific degradations, and highlight some degradation tasks not well solved by the real-SR model as unsatisfactory tasks. However, they only number the different tasks 1-100 and do not present the detailed settings (blur type, kernel width, noise type, noise level, and JPEG compression) of each degradation task. Such inadequate presentation leaves readers, including me, confused as to whether there is a relationship between PSNR distance and degradation type. 2. The authors claim that "... divide them (unsatisfactory degradation tasks) into multiple task groups based on their similarity to reduce negative transfer." However, the task similarity computed by the PSNR distance is not convincing, and it is worth further studying the relationship between the task grouping strategy and the degradation distribution. 3. At lines 177-178, "in this study, we use the found degradation task groups to train a task group classifier, which is then integrated into a standard real-SR framework". I want to know how the number of task groups is selected, and how different numbers of groupings affect the final performance.
If the number of groupings is set to infinity, is the task group classifier equivalent to the well-studied degradation estimator? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. At lines 166-167, "we make a trade-off to fine-tune the pre-trained real-SR network on all unsatisfactory tasks simultaneously through joint-training". What is the joint learning in this paper? 2. It is unfair to conduct the evaluation experiments on the grouped DIV2K (namely, DIV2K5G) with 5 different degradation groups. The authors should also conduct a fair comparison with SOTA methods on widely used benchmarks, such as SwinIR [A1] and DASR [A2]. [A1] Liang J, Cao J, Sun G, et al. SwinIR: Image restoration using Swin Transformer. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 1833-1844. [A2] Wang L, Wang Y, Dong X, et al. Unsupervised degradation representation learning for blind super-resolution. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 10581-10590. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 4Sx7, We appreciate your feedback and would like to address the concerns raised in your comments, which include some factual errors and misunderstandings. ## Factual Errors & Misunderstandings ### Q1. The authors compare the PSNR distance... However, they only number the different tasks 1-100, and do not present the detailed settings (blur type, kernel width, noise type, noise level, and JPEG compression) of each degradation task. Such inadequate presentation leaves readers, including me, confused as to whether there is a relationship between PSNR distance and degradation type. (1) Please note that **the 100 degradation tasks are only used for illustration in Figure 1**, as stated in **Line 115**. (2) Please note that we sampled **4,000** tasks for the main experiments (**Line 198**) and **10,000** tasks for the experiments in the supplementary materials (**Line 460**), which can sufficiently represent the degradation space. To efficiently identify the unsatisfactory tasks from such a large number of tasks, we do not train a single-task network for each task, but propose a gradient-based performance indicator as described in **Lines 138-148**. (3) Please also note that due to the complex combinations of degradations in a large degradation space (e.g., the RealESRGAN degradation model), it is very difficult to group degradation tasks by their types, as pointed out in [1]. ### Q2. The authors claim that "... divide them (unsatisfactory degradation tasks) into multiple task groups based on their similarity to reduce negative transfer." However, the task similarity computed by the PSNR distance is not convincing, and it is worth further studying the relationship between the task grouping strategy and the degradation distribution. 
(1) The task grouping approach (e.g., the CVPR-2018 best paper: Taskonomy [2]) widely used in multi-task learning [2,3,4] focuses on finding the tasks that should be trained together based on the performance differences (e.g., PSNR distance) between the multi-task network and single-task networks. (2) As mentioned in [4], the term “task similarity” can easily be misunderstood to imply a strong attribute relationship between tasks. In fact, the task similarity in our paper represents the affinity relationship between the single-task network and the multi-task (real-SR) network, rather than the similarity between degradation types. We will further clarify this in the final version. (3) The consensus in the field recognizes that improved performance, such as a higher PSNR, contributes to superior visual outcomes. Our experimental findings further validate the achievement of state-of-the-art results. ## Other Comments ### Q3. At lines 177-178, "in this study, we use the found degradation task groups to train a task group classifier, which is then integrated into a standard real-SR framework". I want to know how to select the number of task groups, and how different numbers of groupings affect the final performance. If the number of groupings is set to infinity, is the task group classifier equivalent to the well-studied degradation estimator? Please note that in **Section 7.3.4 (Line 490)** of the supplementary materials, we've provided additional results and a discussion about the effect of the number of task groups. The number of task groups is determined by the threshold (**Line 206**) of performance improvement. We find that as more and more groups are added, the rate of performance improvement for each new group becomes smaller. Our approach differs from the degradation estimation method, which estimates every degradation task, because we have a limited number of groups of unsatisfactory tasks. ### Q4. 
At lines 166-167, "we make a trade-off to fine-tune the pre-trained real-SR network on all unsatisfactory tasks simultaneously through joint-training". What is the joint learning in this paper? Joint-training refers to training a single real-SR network with multiple (e.g., thousands of) degradation tasks simultaneously, which is the strategy used by most existing real-SR methods such as RealESRGAN and BSRGAN, as stated in **Lines 139-140**. ### Q5. It is unfair to conduct the evaluation experiments on the grouped DIV2K (namely, DIV2K5G) with 5 different degradation groups. The authors should also conduct a fair comparison with SOTA methods such as SwinIR [A1] and DASR [A2] on widely used benchmarks. Thank you for the suggestion. Following the approach used by SwinIR and BSRGAN, we generate a new dataset DIV2K\_random by randomly adding degradations sampled from the degradation model of RealESRGAN to the DIV2K\_val dataset. Additionally, we conduct a comparison on the real benchmark AIM2019-val, the test set used for the real SR track in the AIM 2019 Challenge [5]. On both datasets, our TGSR outperforms state-of-the-art methods consistently and significantly. We will include the results in the final version.

| | | ESRGAN | BSRGAN | RealESRGAN | SwinIR | DASR | TGSR |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| DIV2K\_random | PSNR | 20.63 | 23.76 | 23.54 | 23.13 | 23.52 | **23.84** |
| DIV2K\_random | LPIPS | 0.6345 | 0.4622 | 0.4423 | 0.4432 | 0.4832 | **0.4368** |
| AIM2019-val | PSNR | 23.16 | 24.20 | 23.89 | 23.89 | 23.76 | **24.27** |
| AIM2019-val | LPIPS | 0.5500 | 0.4000 | 0.3960 | **0.3870** | 0.4210 | **0.3899** |

[1] Crafting Training Degradation Distribution for the Accuracy-Generalization Trade-off in Real-World Super-Resolution. ICML2023. 
[2] Taskonomy: Disentangling task transfer learning [3] Conflict-averse gradient descent for multi-task learning [4] Efficiently Identifying Task Groupings for Multi-Task Learning [5] AIM 2019 challenge on real-world image super-resolution: Methods and results --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal; it has addressed some of my concerns. In my opinion, the group classifier works like a degradation classifier that identifies LR images with specific degradations, which can be validated in Fig. 4. Instead of learning degradation representations that distinguish different degradation types [33], this classifier simplifies the degradation representation learning task to a classification task on a set of groups. Still, I have several concerns: (1) I wonder about the contribution of the CE term in Eq. 6. Without the CE loss term, the classifier is optimized according to the SR loss, and I wonder whether this can produce better performance than the heuristic approach used in this paper for grouping. (2) I agree with Reviewer LkK8 that the performance distance may be caused by many factors and this indicator seems not very convincing as a measure of task competition. --- Reply to Comment 1.1.1: Comment: #### 1. I wonder about the contribution of the CE term in Eq. 6. Without the CE loss term, the classifier is optimized according to the SR loss, and I wonder whether this can produce better performance than the heuristic approach used in this paper for grouping. Thank you for your question. Following your suggestion, we've conducted the experiment and the results are presented in the table below. The results clearly show that TGSR w/o CE, which optimizes the SR network solely with the SR loss, only exhibits slight improvement over RealESRGAN on groups 0, 2, and 4, whereas it performs considerably worse than our proposed TGSR. 
This serves as validation for the effectiveness of our task grouping method, wherein we identify underperforming tasks with comparable performance levels and train them collectively. It highlights the positive impact of training similar tasks together, aligning with previous findings in the multi-task learning literature.

| | | Group0 | Group1 | Group2 | Group3 | Group4 |
| ------ | ------ | ------- | ------- | ------- | ------- | ------- |
| RealESRGAN | PSNR | 23.85 | 20.10 | 22.07 | 24.30 | 24.58 |
| | LPIPS | 0.4325 | 0.5355 | 0.4701 | 0.4147 | 0.3970 |
| TGSR w/o CE | PSNR | 23.94 | 20.08 | 22.16 | 24.17 | 24.75 |
| | LPIPS | 0.4303 | 0.5348 | 0.4696 | 0.4155 | 0.3963 |
| TGSR | PSNR | **23.99** | **21.10** | **23.15** | **24.62** | **25.03** |
| | LPIPS | **0.4286** | **0.5056** | **0.4494** | **0.3975** | **0.3851** |

#### 2. I agree with Reviewer LkK8 that the performance distance may be caused by many factors and this indicator seems not very convincing as a measure of task competition. First, please note that *performance indicators that leverage a comparison between the multi-task network and single-task networks have been extensively employed in the field of multi-task learning* (e.g., the quality rate in [1] and the relative performance indicator in [2] and [3]). Our proposed performance indicator follows the same design principle. Second, as stated in **Lines 114-124**, the only variable we adjusted is the number of degradation tasks. Therefore, the performance drop can be solely attributed to the increased number of tasks. This observation aligns with the principles of multi-task learning [1,2,3], where the phenomenon is commonly referred to as task competition, as also mentioned in **Lines 111-114**. [1] Zamir A R, Sax A, Shen W, et al. Taskonomy: Disentangling task transfer learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 3712-3722. [2] Fifty, C., Amid, E., Zhao, Z., Yu, T., Anil, R., & Finn, C. (2021). 
Efficiently identifying task groupings for multi-task learning. Advances in Neural Information Processing Systems, 34, 27503-27516. [3] Standley, T., Zamir, A., Chen, D., Guibas, L., Malik, J., & Savarese, S. (2020, November). Which tasks should be learned together in multi-task learning?. In International Conference on Machine Learning (pp. 9120-9132). PMLR.
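As a concrete illustration of the grouping logic discussed in this thread (a relative PSNR distance between single-task references and the jointly-trained model, with a threshold separating satisfactory from unsatisfactory tasks), here is a minimal sketch; all function names, thresholds, and numbers are hypothetical and not taken from the paper:

```python
def psnr_distance(single_task_psnr, multi_task_psnr):
    # Relative indicator: how far the jointly-trained real-SR model
    # falls short of the single-task reference on one degradation task.
    return single_task_psnr - multi_task_psnr

def group_unsatisfactory_tasks(task_psnrs, threshold=0.3, bin_width=0.5):
    """task_psnrs maps task_id -> (single_task_psnr, multi_task_psnr).
    Tasks whose PSNR distance exceeds `threshold` are treated as
    unsatisfactory and bucketed into groups of similar distance."""
    groups = {}
    for task_id, (single, multi) in task_psnrs.items():
        d = psnr_distance(single, multi)
        if d <= threshold:
            continue  # satisfactory: the joint model is close enough
        bucket = int(d // bin_width)
        groups.setdefault(bucket, []).append(task_id)
    return groups

# Hypothetical numbers: tasks 1 and 2 lag their single-task
# references by roughly 1 dB and land in the same group.
example = {0: (24.0, 23.9), 1: (24.0, 23.0), 2: (25.0, 23.6)}
print(group_unsatisfactory_tasks(example))  # {2: [1, 2]}
```

Note that grouping here is by performance gap, not by degradation type, which is exactly the distinction the rebuttal draws.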
Summary: This paper models real-world image super-resolution (real-SR) from a multi-task learning perspective, that is, it treats real-SR as solving multiple distinct degradation tasks. To this end, the authors propose a task-grouping approach that groups similar tasks together. Extensive experiments demonstrate the effectiveness of the proposed method. Strengths: The authors regard learning a real-SR model as a multi-task learning task and highlight the task competition problem. The authors develop a task grouping-based real-SR network (TGSR). Weaknesses: 1. The novelty of this paper should be highlighted. This paper refers to the tasks that are or are not solved well by the real-SR network as satisfactory and unsatisfactory tasks, respectively. This approach is similar to [1], which classifies images as simple, medium, and hard. It would be better to highlight the novelty of the method. [1] ClassSR: A General Framework to Accelerate Super-Resolution Networks by Data Characteristic 2. The multi-task real-SR method does not seem clever. There are an infinite number of degradations. The method randomly samples 100 degradation tasks and obtains 100 fine-tuned models. 3. The task-grouping approach groups similar tasks together. The similarity is based on performance. It implies that a low-noise degradation and a weak-blur degradation are similar. However, low noise and strong noise are the same degradation type, which should be treated as the same task. 4. In Figure 1 (a), this paper uses degradation models based on Real-ESRGAN. Each degradation applied to an image is a composite function of different degradations. Could you investigate the effect of such a case? 5. The authors identify the unsatisfactory tasks by comparing the performance between the jointly-trained multi-task real-SR network and the fine-tuned single-task networks. What if the PSNR were directly compared (like [1]) with a threshold? 6. Some experiment details are not clear. 
Is the real-SR model fine-tuned from a pre-trained Real-ESRGAN or trained from scratch? 7. The performance is not state-of-the-art. In Table 1, the proposed method is worse than RDSR. In Table 2, RealESRGAN-G has no advantage over RealESRGAN for Group0 and Group1. How do the authors demonstrate the superiority of the proposed method? In Figure 5, the input images have much noise rather than low resolution. It would be better to provide SR results. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors do not address the limitations in the paper. It would be better to include the above suggestions for improvement. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer tGSk, Thank you for your feedback. We'd like to address some factual inaccuracies in your comments and clarify the misunderstandings. ## Factual Errors & Misunderstandings ### Q1. The multi-task real-SR method does not seem clever. There are an infinite number of degradations. The method randomly samples 100 degradation tasks and obtains 100 fine-tuned models. Please note that **the 100 degradation tasks are only used for illustration in Figure 1 (Line 115)**. We sampled **4,000** tasks for the main experiments (**Line 198**) and **10,000** tasks for experiments in the supplementary materials (**Line 460**). To efficiently identify the unsatisfactory tasks from such a large set of tasks, we do not train a single-task network for each task, but propose a gradient-based performance indicator as described in **Lines 138-148**. As also noticed by Reviewer yhLt, it is obviously much faster than directly fine-tuning a single-task network. ### Q2. The performance is not state-of-the-art. (i) In Table 1, the proposed method is worse than RDSR. (ii) In Table 2, RealESRGAN-G has no advantage over RealESRGAN for Group0 and Group1. How do the authors demonstrate the superiority of the proposed method? (iii) In Figure 5, the input images have much noise rather than low resolution. It would be better to provide SR results. (i) Please note that RDSR is trained with the MSE loss and hence has a higher PSNR than GAN-based SR methods, which is normal. (ii) RealESRGAN-G is not our method but is used to analyze the performance of RealESRGAN in different groups (**Lines 272-276**). It can be seen clearly that our TGSR outperforms RealESRGAN. (iii) Figure 5 shows real-SR results. Please note that real-SR encompasses multiple degradations, including noise. In the supplementary file (**Sections 7.4.1 and 7.4.2**), we've provided additional real-SR results, which include degradations with slight noise. ### Q3. The novelty of this paper should be highlighted. 
This paper is similar to ClassSR. What if the PSNR were directly compared (like ClassSR) with a threshold? We'd like to highlight the novelty of our method by clarifying two basic concepts. #### 1. **Efficient-SR vs. Real-SR** (1) We are very familiar with ClassSR, which solves the efficient SR problem under the classical SR setting and only has one degradation task, i.e., bicubic down-sampling. Hence, one GT image corresponds to one LR image. ClassSR divides the LR images into simple, middle, and hard texture groups by absolute PSNR value. (2) Real-SR (e.g., BSRGAN and Real-ESRGAN) introduces a complex degradation model to synthesize LR images, which includes a series of degradation types, such as blur, noise, resize, and JPEG. Hence, one GT image corresponds to multiple LR images. It is clear that hard textures may contain slight and strong degradations. Therefore, there is no linear correlation between absolute PSNR values and unsatisfactory degradation tasks. #### 2. **Absolute PSNR vs. Relative PSNR** (1) The objective of real-SR is to achieve improved performance across all degradation tasks. In Section 3, we observe a range of absolute PSNR values, from 19dB to 25dB, on the validation set. It is worth noting that unsatisfactory degradation tasks can encompass both slight (e.g., 24dB) and strong (e.g., 21dB) degradations. Therefore, **relying solely on absolute PSNR values is inadequate for identifying unsatisfactory tasks**. (2) To address this issue, we utilize the performance difference, specifically the PSNR distance between single-task networks and the multi-task real-SR network, as a means to identify unsatisfactory tasks. This approach is rooted in the common understanding within the field of multi-task learning [1,2,3] and is exemplified in notable works such as the CVPR 2018 best paper Taskonomy [2]. By considering the performance difference, we can more effectively identify tasks that fall short of expectations and require further attention. ### Q4. 
Low noise and strong noise are the same degradations, which should be treated as the same task. (1) The task grouping approach (e.g., the CVPR-2018 best paper: Taskonomy [2]) widely used in multi-task learning [1,2,3] focuses on finding the tasks that should be trained together based on the performance differences (e.g., PSNR distance) between the multi-task network and single-task networks. (2) As mentioned in [1], the term “task similarity” can easily be misunderstood to imply a strong attribute relationship between tasks. In fact, the task similarity in our paper represents the affinity relationship between the single-task network and the multi-task (real-SR) network, rather than the similarity between degradation types. We will further clarify this in the final version. (3) The consensus in the field recognizes that improved performance, such as a higher PSNR, contributes to superior visual outcomes. Therefore, if performance enhancements can be attained, grouping weak blur and noise together is deemed acceptable. Our experimental findings further validate the achievement of state-of-the-art results. ### Q5. Each degradation of RealESRGAN is a composite function of different degradations. Could you investigate the effect of such a case? In our response to Q4, we've clarified that our task grouping method is not based on degradation types but on performance. In addition, it is difficult to analyze the composite function in a large degradation space, as pointed out in [4]. ## Other Comments ### Q6. Is the real-SR model fine-tuned from a pre-trained Real-ESRGAN? In our experiment, the real-SR model is fine-tuned from a pre-trained Real-ESRGAN. We will make this clear in the final version. [1] Efficiently Identifying Task Groupings for Multi-Task Learning [2] Taskonomy: Disentangling task transfer learning [3] Which Tasks Should Be Learned Together in Multi-task Learning? 
[4] Crafting Training Degradation Distribution for the Accuracy-Generalization Trade-off in Real-World Super-Resolution --- Rebuttal 2: Comment: Dear Reviewer tGSk, As the discussion period is closing soon, we would really appreciate if you would let us know whether your concerns have been resolved. We would be happy to discuss with you if you have further questions. Thank you very much for your time! Regards, Authors
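The absolute-vs-relative PSNR distinction argued in this thread can be made concrete with a small sketch; the cutoffs, margins, and task names below are invented for illustration and are not the paper's:

```python
def unsatisfactory_by_absolute(multi_psnr, cutoff=23.0):
    # Naive rule: flag any task whose absolute PSNR under the joint
    # model falls below a cutoff. This misses weakly-degraded tasks
    # that score high in absolute terms yet still leave headroom.
    return {t for t, p in multi_psnr.items() if p < cutoff}

def unsatisfactory_by_relative(multi_psnr, single_psnr, margin=0.5):
    # Rule advocated in the rebuttal: flag tasks where a single-task
    # reference beats the joint model by more than `margin` dB,
    # regardless of the absolute PSNR level.
    return {t for t in multi_psnr
            if single_psnr[t] - multi_psnr[t] > margin}

multi = {"slight_blur": 24.0, "strong_noise": 21.0}
single = {"slight_blur": 25.2, "strong_noise": 21.2}
print(unsatisfactory_by_absolute(multi))          # {'strong_noise'}
print(unsatisfactory_by_relative(multi, single))  # {'slight_blur'}
```

The two rules disagree on both toy tasks: the slight-blur task scores well in absolute terms yet lags its single-task reference, while the strong-noise task scores poorly but is already near its reference.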
null
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper casts real-world image SR as a multi-task learning task for complex image degradations. By comparing the performance gap between the real-SR model and the fine-tuned model for a specific degradation, satisfactory tasks and unsatisfactory tasks are distinguished. On this basis, it proposes a task grouping method to address real-world SR. Strengths: + The paper is well-written and easy to understand. + It is interesting to analyze real-world image SR from a multi-task learning view. + The paper has considered many degradations for evaluation. Weaknesses: - The main concern is about the motivation. As claimed in Lines 36-37, real SR faces a challenge of task competition or task conflict. But, 1) it is still unclear how to explicitly represent or accurately model specific real degradations for the task of real SR. What is the meaning of "task" for real SR? What is the evidence of the "task conflict"? 2) Although the authors use the performance gap to define satisfactory and unsatisfactory tasks, it seems to have a prerequisite that the results of the pre-trained real-SR network and single-task networks on each degradation task are reliable, especially based on their PSNR distance (Line 119). This is rather unconvincing. It is hard for me to believe this demonstration of the motivation in Sec. 3 and Fig. 1. 3) Line 126: "Our analysis in Sec. 3 shows that when a real-SR network is tasked with many degradation tasks, they may compete for model capacity or interfere with each other, resulting in a significant decline in performance for certain tasks." I could not understand what the evidence for the claimed competition is. Besides, the performance decline may not necessarily result from the claimed competition and may be influenced by other factors. Overall, the analyses of the motivation are rather unconvincing and it is hard for me to believe in the rationality of the work. 
- Despite the analyses in Sec. 3, it does not explain what cases the satisfactory or unsatisfactory tasks actually indicate (it is too simple to rely on a PSNR threshold) and why they have these differences. I think this explanation is important for properly understanding the real SR task and the motivation for treating it as multi-task learning. Besides, how should we understand the derived degradation task groups in Fig. 4? - Method (Sec. 4). It could not ensure the reliability of using the pre-trained real-SR network and single-task networks. - Evaluation: 1. Line 204: "we evaluate the model on Set14...." But, Set14 has a very limited number of images. Is this convincing? 2. All the evaluations are based on RealESRGAN and there is no evaluation on real data with real degradations, rather than degradations generated by a GAN-based model. This is a very important extension of the demonstration for the work. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I have listed and explained my concerns about this paper in "Weaknesses". I expect responses to those issues. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors just claim one limitation of the sampling from the degradation space in "Conclusion". This is actually an important issue for this work. Thus, more demonstration on real data, not from RealESRGAN, would be more convincing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer LkK8, Thank you for your feedback. We’d like to provide some contextual information to help you better understand our method and address your concerns. ## Task Grouping in Multi-task Learning (1) The task grouping approach (e.g., the CVPR2018 best paper: Taskonomy [2]) widely used in multi-task learning [1,2,3] focuses on finding the tasks that should be trained together based on the performance differences (e.g., PSNR distance) between the multi-task network and single-task networks. (2) As mentioned in [1], the term “task similarity” can easily be misunderstood to imply a strong attribute relationship between tasks. In fact, the task similarity in our paper represents the affinity relationship between the single-task network and the multi-task (real-SR) network, rather than the similarity between degradation cases. We will further clarify this in the final version. ## Concerns ### Q1. It is still blind to explicitly represent or accurately model specific real degradations for the task of real SR. What is the meaning of "task" for real SR? What is the evidence of the "task conflict"? We've provided the definition of "task" in **Lines 93-110**. It is important to note that our task grouping method is not based on the similarity of degradation types but performance. The concept of "task conflict/competition" is explained in **Lines 111-124**. ### Q2. Despite the analyses in Sec.3, it does not explain what cases the satisfactory task or unsatisfactory task actually indicate (it is too simple based on a PSNR threshold) and why they have these differences. Besides, how to understand the derived degradation task groups in Fig.4? (1) Basically, the unsatisfactory tasks are the degradation cases not well solved by the jointly-trained real-SR network. Please refer to **Lines 120-124** for the explanation of satisfactory and unsatisfactory tasks. 
(2) From Table 1, we can see that the performance of unsatisfactory tasks includes both simple degradation cases (25dB in Group 4) and difficult degradation cases (21dB in Group 1), while the performance of satisfactory tasks (Group 0) is around 24dB. This observation indicates that the real-SR network tends to learn the degradations of medium difficulty. ### Q3. Although the authors use the performance gap to define satisfactory and unsatisfactory tasks, it seems to have a prerequisite that the results of the pre-trained real-SR network and single-task networks on each degradation task are reliable, especially based on their PSNR distance (Line 119). This is rather unconvincing. It is hard for me to believe this demonstration of the motivation in Sec. 3 and Fig. 1. As mentioned in **Line 210**, the pre-trained real-SR model we used is based on RealESRGAN, a highly influential and well-recognized work in the SR field with **over 22k stars on GitHub**. Using a single-task network as an empirical performance upper bound is a common practice, as demonstrated by works such as AdaFM [6], CResMD [7], and the accuracy-generalization trade-off [4]. ### Q4. Method (Sec. 4). It could not ensure the reliability of using the pre-trained real-SR network and single-task networks. Please see our response to Q3. ### Q5. I could not understand what the evidence for the claimed competition is. Besides, the performance decline may not necessarily result from the claimed competition and may be influenced by other factors. As described in **Lines 114-124**, the only variable we changed is the number of degradation tasks. Therefore, the performance drop is only affected by the task number. This observation is consistent with multi-task learning, where this phenomenon is referred to as task competition, as also stated in **Lines 111-114**. ### Q6. Evaluation: Line 204: "we evaluate the model on Set14...." But, Set14 has a very limited number of images. Is this convincing? 
All the evaluations are based on RealESRGAN and there is no evaluation on real data with real degradations, rather than degradations generated by a GAN-based model. This is a very important extension of the demonstration for the work. As stated in **Line 202**, we evaluate the model on Set14 for **each unsatisfactory degradation task**. Our main experiment employs 4000 Set14 test sets, each corresponding to a distinct degradation task. Due to space limitations, we have included the results on real data in the supplementary materials, as detailed in **Section 7.4.3**. Thank you for the suggestion. Following the approach used by SwinIR and BSRGAN, we generate a new dataset DIV2K\_random by randomly adding degradations sampled from the degradation model of RealESRGAN to the DIV2K\_val dataset. Additionally, we conduct a comparison on the real benchmark AIM2019-val, the test set used for the real SR track in the AIM 2019 Challenge. On both datasets, our TGSR outperforms state-of-the-art methods consistently and significantly. We will include the results in the final version.

| | | ESRGAN | BSRGAN | RealESRGAN | SwinIR | DASR | TGSR |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| DIV2K\_random | PSNR | 20.63 | 23.76 | 23.54 | 23.13 | 23.52 | **23.84** |
| DIV2K\_random | LPIPS | 0.6345 | 0.4622 | 0.4423 | 0.4432 | 0.4832 | **0.4368** |
| AIM2019-val | PSNR | 23.16 | 24.20 | 23.89 | 23.89 | 23.76 | **24.27** |
| AIM2019-val | LPIPS | 0.5500 | 0.4000 | 0.3960 | **0.3870** | 0.4210 | **0.3899** |

[1] Efficiently Identifying Task Groupings for Multi-Task Learning [2] Taskonomy: Disentangling task transfer learning [3] Which Tasks Should Be Learned Together in Multi-task Learning? 
[4] Crafting Training Degradation Distribution for the Accuracy-Generalization Trade-off in Real-World Super-Resolution [5] Unsupervised degradation representation learning for blind super-resolution [6] Modulating image restoration with continual levels via adaptive feature modification layers [7] Interactive multi-dimension modulation with dynamic controllable residual learning for image restoration --- Rebuttal Comment 1.1: Comment: Thanks so much for the authors' responses and so sorry for my late comment. I have carefully read the rebuttal, but it's a pity that my concerns are not well addressed. For example, about "task" and "task conflict", I carefully read the whole paper when reviewing. I am concerned about their unconvincing and intuitive descriptions without evidence and thus I have the question (Q1). But the authors just provide the details in the paper: Lines 93-110, 111-124. Besides, for Q6, for the results of "real data", I have checked the main paper and the supplementary. It still does not provide the result of real data, e.g., RealSR (ICCV2019), not data from RealESRGAN or AIM2019-val. Overall, I still keep my rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer LkK8, Thank you for your response and for highlighting the specific areas that you find unclear. We appreciate your valuable feedback and would like to provide further clarification to address your questions. We sincerely hope that you will take our additional response into consideration and reconsider your evaluation. Q1. Unconvincing and intuitive descriptions of "task" and "task conflict" for real SR. >In simple terms, an SR task involves restoring a low-resolution image that has undergone a certain degradation. Real-SR assumes that a clear image can undergo various degradations sampled from a large degradation space. 
For instance, if there are 1000 different possible degradations in the degradation space, a real-SR network is trained to simultaneously address these 1000 degradation tasks, making it a form of multi-task learning. The objective is to enable the network to handle a wide range of degradations. We'd like to emphasize that our formulation of real-SR as a multi-task learning problem (Sec. 3) is rigorous. >Task conflict/competition is a fundamental challenge in multi-task learning. It stems from limited model capacity and shared resources. When a model is trained to simultaneously solve multiple tasks, the model must allocate its finite resources, like parameters and computational capacity, to handle each task effectively. However, this allocation creates a trade-off where optimizing one task may come at the expense of others. Figure 1 provides clear evidence that the trained real-SR (multi-task) network favors some degradation tasks over the others, i.e., it is capable of producing satisfactory results for only half of the degradation tasks, while falling short in the remaining half of the degradation tasks. >We hope that our explanation has adequately addressed your concerns. Please do let us know if you have any further questions. Q2: For Q6, for the results of "real data", I have checked the main paper and the supplementary. It still does not provide the result of real data. >Please note that we've provided results on real-world images in **Section 7.4.3 (page 19) of the Supplementary Material**. These real-world images come from the test set Realworld38, which contains many **real scenes**, such as **old photo scene** (rows 1&2 on page19), **building scene** (row 3 on page19), **greyscale scene** (row 4 on page19), and **web scene** (i.e., images downloaded from the internet) (row 5 on page19). Realworld38 was previously employed in widely recognized studies like SwinIR [1] and DASR [2], and we have merely followed these popular works. 
For example, the real image OST_009 (which is used in Figure 5 of SwinIR) can be found in the third row of Figure 15 in Section 7.4.3 of our supplementary material (page 19). >We apologize for the oversight in not mentioning Realworld38. We will include the information in the final version. [1] Liang J, Cao J, Sun G, et al. Swinir: Image restoration using swin transformer[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 1833-1844. [2] Liang J, Zeng H, Zhang L. Efficient and degradation-adaptive network for real-world image super-resolution[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 574-591. Title: Please take a moment to review our additional response --- Rebuttal 2: Comment: Dear Reviewer LkK8, As the discussion period is closing soon, we would really appreciate if you would let us know whether your concerns have been resolved. We would be happy to discuss with you if you have further questions. Thank you very much for your time! Regards, Authors
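The degradation-space view of real-SR described in the rebuttal above can be made concrete with a small sketch. Everything below is a hypothetical illustration (the parameter ranges, the simple box blur, and the function names are assumptions, not the paper's actual pipeline): each training sample draws one random degradation from a space of blur/downsample/noise combinations, which is what turns real-SR into a multi-task problem over many degradation tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_degradation():
    """Sample one point of a (hypothetical) degradation space:
    blur strength, downsampling factor, and noise level."""
    return {"blur": rng.integers(1, 4),      # box-blur radius
            "scale": rng.choice([2, 4]),     # downsample factor
            "sigma": rng.uniform(0.0, 0.1)}  # Gaussian noise std

def degrade(img, d):
    """Apply blur, then downsample, then add noise -- one 'task'."""
    k = 2 * d["blur"] + 1
    pad = np.pad(img, d["blur"], mode="edge")
    blurred = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            blurred[i, j] = pad[i:i + k, j:j + k].mean()  # box blur
    low = blurred[::d["scale"], ::d["scale"]]             # downsample
    return low + rng.normal(0, d["sigma"], low.shape)     # add noise

hr = rng.random((16, 16))              # toy "high-resolution" image
lr = degrade(hr, sample_degradation())  # one sampled degradation task
print(lr.shape)
```

Resampling a fresh degradation per image during training is what makes the network juggle many tasks at once, and the capacity trade-off among them is the "task conflict" discussed above.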
Data-driven Optimal Filtering for Linear Systems with Unknown Noise Covariances
Accept (poster)
Summary: The paper builds on the duality between estimation and control, in order to develop an online data-driven method for MSE-optimal filtering of linear systems with linear observations. Process and observation noise covariances are considered unknown, but stochastic states are assumed to be bounded. The paper proposes using SGD in the space of steady-state stabilizing gains, claims asymptotic convergence to the optimal gain, and provides an asymptotic probabilistic bound on the deviation from the optimal error. Strengths: Although the use of online optimization for controlling linear-quadratic settings is not new (e.g. [r1]), exploiting the duality between control and filtering problems is novel and interesting in this context. The concentration and error bounds are nontrivial and useful, as the concentration bound is non-asymptotic in the series length T. The proofs of SGD convergence and the error bounds are novel as well, even if derived under very strong assumptions. [r1] Cohen A. et al., "Online Linear Quadratic Control", 2018 Weaknesses: Overall, I believe the authors did their best to make the paper well-organized and clear (as far as possible for a rather technical manuscript). The explanations and outlines given before each section are indeed helpful. However, it is still very hard to follow the assumptions and constant definitions, and some of the conclusions. The writing becomes very laconic at some crucial points (e.g. Thm. 2, Remark 7). In particular, Thm. 2 and its proof are not clear, hence it is hard to be convinced of their soundness. Furthermore, in the introduction it is stated that convergence is guaranteed from every initial policy, but according to Theorem 2 the policy cannot enter (or start in) a class of policies where the gradient is smaller than some constant. I didn't find any discussion about when trajectories enter this region. My impression is that this is a good paper, and I might be missing something. 
I will be willing to raise my score when given a more detailed proof and this clarification. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I ask for a more detailed proof for Thm. 2 (see above). In addition, a summary of all assumptions, results and notations can be very useful. Minor typos: l. 249 't' should be replaced by $\gamma$. l. 276 I think i should go between 1 and T-1. l. 309 quite. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The online method uses a surrogate loss using the observations, this is a reasonable choice due to lack of ground-truth states and the observability, which is a strong assumption. The paper should discuss the implications of using this loss with more general (i.e. non observable) systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q**: *... However, it is still very hard to follow the assumptions...* **A**: Please see the Response to All for clarification on the technical results and the changes we made for the revision. **Q**: *...Particularly, Thm. 2 and its proof are not clear ...* **A**: We provide the following observation that clearly states the implications of Theorem 2, followed by its detailed proof: *Observation 1*. Combining Theorem 2 and the PL property (11a) results in a sample complexity for our algorithm. In particular, it follows that $J(L_k)-J(L^*)\leq \varepsilon$, if $s_0\leq \frac{\sqrt{c_1(\alpha) \varepsilon}}{4}$, $s\leq \frac{1}{4}$, and the number of steps $k>\ln(\frac{\varepsilon}{\alpha})/\ln(1-\frac{c_1(\alpha)}{18\ell(\alpha)})$. While the original version of Theorem 2 is stated for any choice of $\epsilon \in (0,1)$, for simplicity we present its proof for the special case of $\epsilon = 1/2$: *Proof of Theorem 2.* The first step of the proof is to show that the assumption of Proposition 2 is satisfied for $E=\nabla \widehat J(L)$ for all $L \in \mathcal{S}\_\alpha \setminus \mathcal{C}\_{\gamma/2}$. This is true because $$\begin{aligned} \|\nabla \widehat J(L) - \nabla J(L)\| \leq s\|\nabla J(L)\| + s_0 \\ \leq s\|\nabla J(L)\| + \frac{\gamma}{2} \|\nabla J(L)\|\\ \leq \gamma \|\nabla J(L)\| \end{aligned}$$ where the first inequality follows from Assumption 3, the second inequality follows from $L \not \in C_{\gamma/2}$ (i.e., $\|\nabla J(L)\| \geq \frac{s_0}{\gamma/2}$, so that $s_0 \leq \frac{\gamma}{2}\|\nabla J(L)\|$), and the last step follows from the assumption $s\leq \gamma/2$.\ The rest of the proof relies on repeated application of Proposition 2. 
In particular, starting from $L_0 \in \mathcal{S}\_\alpha \setminus \mathcal{C}\_{\gamma/2}$, the application of Proposition 2 implies that $L_1\coloneqq L_0 - \bar\eta \nabla\widehat J(L_0)$ remains in the same sublevel set, i.e., $L_1 \in \mathcal{S}\_{\alpha}$, and we obtain the following linear decay of the cost value: $$J(L_1) -J(L^*) \leq \left[1-c_1(\alpha)\bar\eta{(1 - \gamma)}/2\right] [J(L_0)- J(L^*)].$$ Now, if $L_1 \in \mathcal{C}\_{\gamma/2}$ then we stop; otherwise $L_1 \in \mathcal{S}\_\alpha \setminus \mathcal{C}\_{\gamma/2}$ and we can repeat the above process to arrive at $$J(L_2) -J(L^*) \leq \left[1-c_1(\alpha)\bar\eta{(1 - \gamma)}/2\right]^2 [J(L_0)- J(L^*)].$$ Repeating the process generates a sequence of policies $L_0, L_1, L_2 ...$ with a combined linear decay of $$J(L_k) -J(L^*) \leq \left[1-c_1(\alpha)\bar\eta{(1 - \gamma)}/2\right]^k [J(L_0)- J(L^*)],$$ unless at some iteration $j$, we arrive at a policy $L_j$ such that $L_j \in \mathcal{C}\_{\gamma/2}$. This completes the proof. ◻ **Q**: *...convergence is guaranteed from every initial policy, but...I might be missing something...* **A**: We think the missing point here is the PL property of the cost, without which the result of Theorem 2 may seem understated. As shown in Lemma 2 (and discussed in Remark 3), the cost maintains the PL property on each sublevel set. In particular, (11a) implies that on each $\mathcal S_\alpha$, $\|\nabla J (L)\|\_F$ characterizes the optimality gap $J(L) - J(L^*)$ by: $$c_1(\alpha) [J(L) - J(L^*)] \leq \|\nabla J(L)\|\_F^2$$ for some constant $c_1(\alpha)$. This implies that if we have arrived at a candidate policy $L_k$ for which the gradient is small, then the optimality gap should be small (involving the constant $c_1(\alpha)$ that is independent of $L_k$). 
This is the reason that in Theorem 2, it suffices to argue that the generated sequence $L_k$ has a linear decay unless it enters a small neighborhood of $L^*$ containing policies with small enough gradients (denoted by $\mathcal{C}\_\tau$). In particular, if for some $j<k$, we arrive at some policy $L_j \in \mathcal{C}\_\tau$, then by (11a) we can conclude that: $$J(L_j)-J(L^*) \leq \frac{1}{c_1(\alpha)} \|\nabla J(L_j)\|\_F^2 \leq \frac{s_0^2}{c_1(\alpha) \tau^2},$$ which is directly controlled by the bias term $s_0$. This is also the bound used in the argument of Remark 7. We believe this omission is what caused the confusion about what happens when the trajectories enter this region. Additionally, as stated in the introduction, every (stabilizing) initial policy $L_0$ amounts to a finite value of $J(L_0)$ and thus lies in some sublevel set $S_\alpha$. So, starting from such $L_0$, Theorem 2 guarantees linear decay of the optimality gap until the trajectory enters that small neighborhood. Finally, note that the radius of this neighborhood $\mathcal{C}_\tau$ is characterized by the bias term $s_0$, which itself is exponentially decaying to zero in the trajectory length $T$. **Q:** *... a summary of all assumptions, results....* **A**: Assumption 1 states that the linear system is detectable. This is the minimum requirement to make the estimation problem well-posed. Assumption 2 states that the linear system is observable. This stronger assumption is made in lieu of Assumption 1 to improve the clarity of the analysis with fewer system-theoretic technicalities. Assumption 3 states an error bound on the gradient oracle. This is made in order to provide an independent analysis of the SGD algorithm for locally Lipschitz functions in the presence of gradient bias. This assumption is later verified by Thm. 3. Assumption 4 assumes bounded process and observation noise. This is made to facilitate application of matrix concentration inequalities. 
Finally, a complete nomenclature has been added to the supplementary materials. **Q**: *The online method uses a surrogate loss...* **A:** If the system is not observable, there is no guarantee of the existence of a stabilizing policy unless the system is detectable (Assumption 1). In fact, detectability is a necessary and sufficient condition for well-posedness. While our analysis is presented for observable systems, we believe that the extension to the detectable case is possible by adopting more system-theoretic tools in Lemma 1. --- Rebuttal Comment 1.1: Comment: I appreciate the time and effort made to address my (and other reviewers') concerns. After carefully reading the response, I believe the missing points in Thm. 2's proof are now clear. I would recommend clarifying them in the text in order to improve readability for a broader audience. I will raise my score accordingly.
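The mechanics of Theorem 2 and Observation 1 discussed in the rebuttal above can be illustrated numerically on a toy PL objective. The sketch below is an illustration only, not the paper's algorithm: it uses the quadratic $J(x)=\frac{1}{2}\|x\|^2$ (which is PL with $c_1=2$, since $\|\nabla J\|^2 = 2J$) and a hand-built oracle satisfying an Assumption-3-style bound $\|\widehat\nabla J - \nabla J\| \le s\|\nabla J\| + s_0$; the constants $s$, $s_0$ and the step size are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def J(x):
    return 0.5 * np.dot(x, x)  # PL objective: ||grad J(x)||^2 = 2 J(x)

def biased_grad(x, s=0.1, s0=1e-3):
    """Inexact oracle with relative error s and additive bias s0,
    mimicking the gradient-estimate error bound of Assumption 3."""
    g = x  # exact gradient of J
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    v = rng.standard_normal(x.size)
    v /= np.linalg.norm(v)
    return g + s * np.linalg.norm(g) * u + s0 * v

x = np.ones(10)
vals = [J(x)]
for _ in range(200):
    x = x - 0.5 * biased_grad(x)  # constant step size, as in Theorem 2
    vals.append(J(x))

# Linear (geometric) decay at first, then a plateau at a floor of order
# s0^2, matching the bound J(L_j) - J(L*) <= s0^2 / (c_1 * tau^2).
print(vals[0], vals[10], vals[-1])
```

The iterates decay geometrically until the exact gradient is comparable to the additive bias $s_0$, after which the optimality gap stalls at a small floor, exactly the "small neighborhood $\mathcal{C}_\tau$" behavior described above.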
Summary: This submission examines the learning of the Kalman filter gain for linear systems with unknown covariance matrices using noisy output data. Similar to learning linear quadratic regulators for unknown linear systems, the learning problem here is posed as a stochastic policy optimization problem which minimizes the expected output prediction error. Bridging the learning of the Kalman gain and the optimal control gain, the paper provides an interesting convergence analysis of stochastic gradient descent for the policy optimization problem, and bias-variance error bounds of the learning problem, by employing a set of tools from linear systems and stochastic geometry. Strengths: +) The learning of the Kalman gain as a stochastic policy optimization problem, which is amenable to results and studies on learning linear quadratic regulators in the literature; +) the duality between learning the Kalman gain and the optimal control gain in the data-driven setting; +) biased gradients and stability constraints are strategically handled when learning the Kalman gain, along with the bias-variance error bounds. Weaknesses: -) The setup is not very practical, as the system matrices are assumed perfectly known and only the covariance matrices are unknown; even in, e.g., the stated application of aircraft wing dynamics where only approximate models are known, one would assume imperfect knowledge of both the system matrices and covariance matrices, or identify them via a data-driven method first. -) For the learning problem described in lines 106-108, it is unclear whether the problem is well-posed in the sense that, given this information (data), one is able to learn the steady-state Kalman gain $L_\infty$, or at least how large the horizon $T$ must be such that one will be able to obtain a unique $L_\infty$. -) In (1), are uncorrelated random vectors enough for deriving the optimal Kalman filter without mutually independent random noise vectors? 
Also, in the formulation of (5), what are the random quantities, and do we have any conditional expectations or not? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: It would be great if the results can be compared with more related works on learning the Kalman filter under different settings; Zheng, Y. et al. Sample complexity of linear quadratic Gaussian (LQG) control for output feedback systems. In Learning for dynamics and control (pp. 559-570). PMLR. Zhang, X. et al. Learning the Kalman filter with fine-grained sample complexity. arXiv preprint arXiv:2301.12624. In the behavioral theory literature, results are available for learning the Kalman filter for unknown linear systems by using data, although seemingly using, in addition to the noisy output data, also the pure state data; e.g., Liu, W. et al. Learning Robust Data-based LQG Controllers from Noisy Data. arXiv preprint arXiv:2305.01417. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive comments, clarifications, and references to relevant literature that will indeed improve this manuscript. **Q:** *The setup is not very practical as the system matrices are assumed perfectly known but only the covariance matrices are unknown...* **A:** Please see our response to all. **Q:** *For the learning problem described in lines 106-108, it is unclear...* **A:** Please note that lines 106-108 are not intended as a precise mathematical statement of the problem, but as a formal qualitative description: given independent realizations of output data with length $T$, our goal is to learn the steady-state Kalman gain. Our result (formerly Remark 7, now Theorem 1) shows that in order to learn the steady-state Kalman gain up to $\varepsilon$ accuracy, it is sufficient for the time horizon to satisfy $T\geq O(\ln(1/\varepsilon))$. The precise mathematical problem we intend to solve is the steady-state optimization problem (10). The uniqueness of the steady-state Kalman filter gain $L_\infty$ is guaranteed under Assumption 2. **Q:** *In (1), are uncorrelated random vectors enough for deriving the optimal Kalman filter...* **A:** Yes, uncorrelated random vectors are enough to establish that the Kalman filter provides the best *linear* MSE estimate of the state given the observations (Thm. 2 in Kalman, 1960). The word *linear* was missing in the text; we will add it in the revision. Note that the error analysis also does not require independent noise. **Q:** *Also, in the formulation of (5), what are the random quantities and do we have any conditional expectations or not?* **A:** The expectation in (5) is taken over all the random variables, consisting of the initial state $x_0$, dynamic noise $\xi(t)$, and measurement noise $\omega(t)$ for $t= 0, \cdots, T$. The conditional expectation is not necessary (the estimate is constrained to be measurable with respect to the history of observations). 
**Q:** It would be great if the results can be compared with more related works on learning the Kalman filter under different settings; **A:** Thanks for pointing us to the relevant references that we missed in our literature survey. We will include these references, along with a summary of the following comparison, in our revision. (Zheng, Y. et al.) establishes an end-to-end sample complexity bound on learning a robust LQG controller--establishing a nice trade-off between optimality and robustness. As the system parameters are also unknown, this work only considers *open-loop stable* systems. While our filtering design problem is based on the knowledge of system parameters, we do not require the open-loop stability assumption, and its robustness analysis is part of our future work. Furthermore, the complexity bounds in (Zheng, Y. et al.) depend on the length of the trajectory and scale as $O(1/\sqrt{N})$ in the number of trajectories, whereas ours do not depend on the length and scale as $O(1/N)$. (Zhang, X. et al.) considers the problem of learning the steady-state Kalman gain but in a different setup: the model is assumed to be completely unknown. However, the algorithm requires access to a simulator that provides noisy measurements of the MSE $\mathbb E[\|X(t)- \hat{X}(t)\|^2]$, which requires generation of ground-truth state trajectories $X(t)$ (see Assumption 3.2 in the reference). The proposed approaches are different: zeroth-order global optimization vs. first-order stochastic gradient descent. The sample complexity results of our approach and their approach are similar. However, it is difficult to provide a more detailed comparison, as the explicit dependence of the error terms on the problem dimension is not provided. (Liu, W. et al) considers the problem of simultaneously learning the optimal Kalman filter and linear feedback control policy in the LQG setup. Their approach involves solving SDP problems using input-state-output trajectories. 
Their result, for the case when trajectories involve noise, relies on the assumption that the magnitude of the noise can be made arbitrarily small. This is in contrast to our setup, where we only assume a bound on the noise level and do not require access to state trajectories. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal! Comment: Thanks for the effort in addressing my concerns. I have read the rebuttal. I am satisfied with the response and do not have further questions.
Summary: This paper focuses on learning the optimal filtering policy (Kalman gain) in a linear system with known system matrices and unknown noise covariance matrices. The paper considers a proxy objective to avoid learning with hidden variables and characterizes a dual form of the objective, optimized subsequently using SGD. Convergence analysis and finite-time error bounds are provided. Strengths: originality: studies a traditional Kalman filtering problem with a different setting and objective. The duality theory and the proposed method of optimizing the dual objective seem original. quality: provides detailed theoretical results and analysis. clarity: clearly formulates the setting. The whole picture, from the background of Kalman filtering, problem setting, and duality theory, to the SGD convergence analysis, is well organized and easy to follow. Weaknesses: 1. The paper lacks enough empirical experiments to support the idea. The only two figures appear in the supplementary materials, but the linear convergence outside a region near the optimum, along with reasons why the performance of the $M=20$ and $T=200$ case differs from other settings, is not presented clearly in this paper. It is suggested that: - the figures be plotted on a log scale to illustrate the linear convergence; also it is better to illustrate in the plot the scale of the "no-linear-rate" small region; - additional studies be provided to demonstrate why the medium hyperparameters $M=20$ and $T=200$ yield the worst/best convergence rate among the other hyperparameters (otherwise one is likely to think that the proposed SGD approach is not so robust to hyperparameters); - maybe a comparison with some existing methods on real data is preferred. 2. The organization of this paper can be improved. For example, - numerical results can be put in the main paper. - from my view, it is more natural to first characterize the bias in the estimated gradient (Sec. 4.2) and then provide convergence guarantees under such biased gradients (Sec. 4.1). 
Otherwise, it may lead to confusion about the need for convergence analysis under biased estimates (e.g., the $E$ appearing in Lemma 5 & Proposition 2). 3. Some typos: - Eq. (3b): $P(T)$ should be $P(t)$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The main limitation of the proposed approach is that the biased gradient causes the convergence rate to be linear only outside a small region near the true optimal value. It may not seem so preferable if the region is large, thus not ensuring exact convergence to the optimal value. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors would like to express their gratitude to this reviewer for their insightful comments that helped us improve the presentation and empirical results of this manuscript. ***Q:** The paper lacks enough empirical experiments to support the idea. The only two figures appear in the supplementary materials, but the linear convergence outside a region near the optimum, along with reasons why the performance of the $M=20$ and $T=200$ case differs from other settings, is not presented clearly in this paper. It is suggested that: the figures be plotted on a log scale to illustrate the linear convergence; also it is better to illustrate in the plot the scale of the \"no-linear-rate\" small region; additional studies be provided to demonstrate why the medium hyperparameters $M=20$ and $T=200$ yield the worst/best convergence rate among other hyperparameters (otherwise one is likely to think that the proposed SGD approach is not so robust to hyperparameters); maybe a comparison with some existing methods on real data is preferred.* **A:** Thanks for pointing out the strange nature of the empirical results, which made us look closely at the code and find a mistake in computing the exact steady-state Kalman gain (the multiplication by the $A$ matrix was not included in the code). The mistake produces an error of the order $A - I = O(\Delta t)$, which affected the relative order for different $M$. The corrected figures are provided in the accompanying pdf, which illustrates the expected order of the error curves and the linear convergence regime (the errors are averaged over $50$ simulations). Finally, please note that our contributions are mostly methodological and theoretical: the empirical results are provided for illustration purposes. Extensive numerical experiments for specific applications and comparisons with related approaches are the subject of a separate work. ***Q:** The organization of this paper can be improved. 
For example, numerical results can be put in the main paper. From my view, it is more natural to first characterize the bias in the estimated gradient (Sec. 4.2) and then provide convergence guarantees under such biased gradients (Sec. 4.1). Otherwise, it may lead to confusion about the need for convergence analysis under biased estimates (e.g., the $E$ appearing in Lemma 5 & Proposition 2).* **A:** Thanks for the suggestion about the organization of the paper. We will move the numerical results to the main paper by moving supporting lemmas to the supplementary material. We will also interchange Sections 4.1 and 4.2, as it fits the flow of the arguments better. ***Q:** The main limitation of the proposed approach is that the biased gradient causes the convergence rate to be linear only outside a small region near the true optimal value. It may not seem so preferable if the region is large, thus not ensuring exact convergence to the optimal value.* **A:** We would like to clarify that the convergence guarantee to a region around the optimal value is due to the finite length of the data. The region can be made arbitrarily small by choosing a larger length and a larger batch size. In particular, to achieve $\varepsilon$ error, we only require the length $T\geq O(\ln(1/\varepsilon))$ and $M\geq O(1/\varepsilon)$. See the formal version of Theorem 1 in the Response to All. Also, see the newly added Fig. 1 (d), illustrating that the optimality gap at the final iteration (i.e., the radius of the small neighborhood around optimality) decays linearly as a function of the trajectory length $T \leq 50$---until the variance error dominates beyond $T=50$. It is clear that increasing the batch size $M$ will allow a further decrease of this optimality gap beyond $T=50$.
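The kind of bug discussed in the rebuttal above (a missing multiplication by $A$ in the exact steady-state Kalman gain) is easy to guard against with a small reference computation. The sketch below is an illustration on toy matrices (all values assumed, not from the paper): it computes the predictor-form gain $L = A P H^\intercal (H P H^\intercal + R)^{-1}$ by fixed-point iteration of the filter Riccati equation, keeping the leading $A$ factor explicit.

```python
import numpy as np

def steady_state_kalman_gain(A, H, Q, R, iters=500):
    """Steady-state (predictor-form) Kalman gain via fixed-point
    iteration of the filter Riccati equation:
        P <- A P A^T - A P H^T (H P H^T + R)^{-1} H P A^T + Q
    Note the leading A in the gain, the factor reported missing
    from the original experiment code."""
    n = A.shape[0]
    P = np.eye(n)
    for _ in range(iters):
        S = H @ P @ H.T + R
        P = A @ P @ A.T - A @ P @ H.T @ np.linalg.solve(S, H @ P @ A.T) + Q
    S = H @ P @ H.T + R
    return A @ P @ H.T @ np.linalg.inv(S)

# Toy system (hypothetical matrices, for illustration only).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[0.5]])
L = steady_state_kalman_gain(A, H, Q, R)
print(L.shape)
```

A quick sanity check is that the closed-loop error dynamics $A - LH$ must be stable (spectral radius below one), which is also the stability constraint maintained by the gains in the paper's setting.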
Summary: This paper studies the learning of the optimal steady-state Kalman filter gain for a linear dynamical system with known system matrices but unknown process and measurement noise covariances. In particular, the learning process involves minimizing the prediction error of the observed output using a dataset of independently realized observation sequences. Leveraging the duality between LQR control and Kalman filtering, the authors reformulate this learning problem as synthesizing an optimal control policy for an adjoint system and propose a stochastic gradient descent (SGD) algorithm to solve it. The authors also provide sample complexity and non-asymptotic error guarantees by conducting a convergence analysis of SGD accounting for biased gradients and stability constraints. In particular, they show that the bias-variance error bounds scale logarithmically in the system dimension and that the variance term does not change with the horizon. Strengths: I think this paper is well-written. The text is easy to read and the technical details are mostly neatly presented. The analysis also makes sense, although I didn't verify it in detail. The results are fairly general and significant, as the problem of unknown noise covariances in Kalman filtering should be practically relevant as well. Weaknesses: I don't consider these as weaknesses necessarily, but here are a few things I would recommend the authors to consider: 1. The results of Remark 7 can be discussed earlier, maybe within the informal Theorem 1. 2. You might find it useful to adopt the $(\kappa,\gamma)$-stability definition by Cohen et al. 2018 in your bounds involving $\sqrt{\rho(A_L)}$, especially in Lemma 6. 3. It wasn't clear to me at first why the singularity of $H^T H$ requires a significantly different treatment than the LQR case; maybe you can clarify this earlier in the text. 4. It would be helpful to discuss possible future directions. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I have a few questions for the authors out of curiosity. How essential is it to assume bounded noise for Propositions 4 and 5? In Figure 1a, could you explain why the M=10 case seems to be the best performing, even better than M=30? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors already addressed some of the limitations, such as the requirement for perfect knowledge of the system matrices. I would also consider the bounded noise setting one of the limitations that might be improved in future works. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors would like to express their gratitude to this reviewer for their insightful comments. **Q:** *The results of Remark 7 can be discussed earlier, maybe within the informal Theorem 1.* **A:** We are going to replace Remark 7 with a formal version of Theorem 1 that includes the necessary assumptions and the complexity bounds (see Response to All). To avoid introducing symbols in the introduction, we keep the informal version of Theorem 1 as it is and refer to its formal version for details. **Q:** *You might find it useful to adopt the $(\kappa,\gamma)$-stability definition by Cohen et al. 2018 in your bounds involving $\sqrt{\rho(A_L)}$, especially in Lemma 6.* **A:** Thanks for pointing out the notation convention. Indeed, we can rewrite the bound in Lemma 6 in terms of $(\kappa,\gamma)$ as opposed to the spectral radius. We will point this out in our revision. However, we decided to also keep the explicit appearance of the spectral radius in the bound to facilitate comparison with the related literature, where the spectral radius appears in the respective bounds (Fazel et. al. 2018, Tsiamis & Pappas, 2023). **Q:** *It wasn't clear to me at first why the singularity of $H^\intercal H$ requires a significantly different treatment than the LQR case; maybe you can clarify this earlier in the text.* **A:** The existing convergence results for the LQR problem rely on the positive-definiteness of the covariance of the initial state. For instance, the constant $\mu = \lambda_{\min}(\Sigma)$ appears in all bounds in (Fazel et. al. 2018). We hint at this difference on line 51 of the Introduction and provide more details in Remark 2, after we introduce the dual LQR problem in Proposition 1. We will modify Remark 2 to further clarify the difference with the existing analysis. 
**Q:** *It would be helpful to discuss possible future directions.* **A:** We will include the following directions for future research: - Single-trajectory data: A direction for future research is to study how to adapt the proposed algorithm and error analysis to the setting where a single long trajectory is available, as opposed to several independent trajectories of finite length. - Inaccurate system matrices: An important research direction is to carry out a robustness analysis, similar to its LQR dual counterpart, to study the effect of errors in the system parameters on the policy learning accuracy. - Nonlinear setting: The ultimate research goal is to use the recently introduced duality in nonlinear filtering (Kim, 2022) as a bridge to bring tools from RL to nonlinear filtering. **Q:** *How essential is it to assume bounded noise for Propositions 4 and 5?* **A:** We are using the standard Matrix Bernstein Inequality, which requires almost surely norm-bounded realizations of the random matrices. It is possible to improve this using more advanced results on concentration inequalities, which is part of our future research direction. **Q:** *In Figure 1a, could you explain why the $M=10$ case seems to be the best performing, even better than $M=30$?* **A:** This was due to a mistake in our code. Please see the general response and the corrected figures in the accompanying pdf. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you very much for the time and effort you spent providing a clear and detailed response to my concerns and questions. I'm convinced by your answers and in favor of maintaining my score.
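The role of the bounded-noise assumption discussed above can also be seen empirically: sample means of zero-mean, almost-surely bounded random matrices concentrate in spectral norm, which is exactly the regime the Matrix Bernstein inequality quantifies. The sketch below is purely illustrative (toy dimension and a uniform bounded distribution, both assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
errs = []
for M in (100, 400, 1600):
    # Entries uniform on [-1, 1]: zero mean and almost surely bounded,
    # so Matrix Bernstein controls the spectral norm of the sample mean.
    batch = rng.uniform(-1.0, 1.0, size=(M, d, d))
    errs.append(np.linalg.norm(batch.mean(axis=0), 2))
    print(M, round(errs[-1], 4))
```

The error shrinks roughly like $1/\sqrt{M}$ as the batch size grows, mirroring how a larger batch size tightens the variance term in the gradient-estimate error bound.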
Rebuttal 1: Rebuttal: We appreciate the reviewers' feedback and suggestions, which have helped improve the paper. We provide a summary of the response to the main concerns here, followed by individual responses to the reviewers. **Presentation of the technical results:** In order to improve the presentation of the technical results, we will make the following changes to the paper: - Remark 7 will be replaced with the formal statement of Theorem 1. This will include the main result and the necessary assumptions: **Theorem 1.** *Consider the linear system (1) under Assumptions 2 and 4. Suppose the SGD algorithm is implemented with initial stabilizing gain $L_0$, the step-size $\bar\eta \coloneqq \frac{2}{9 \ell(J(L_0))}$, for $k$ iterations, a batch-size of $M$, and data-length $T$. Then, $\forall \varepsilon>0$ and with probability larger than $1-\delta$, $J(L_k) - J(L^\*)\leq \varepsilon$ if $$T \geq O(\ln(\frac{1}{\varepsilon})),\quad M \geq O(\frac{1}{ \varepsilon}\ln(\frac{1}{\delta}) \ln(\ln(\frac{1}{\varepsilon}))), \quad \text{and}\quad k \geq O(\ln(\frac{1}{\varepsilon})).$$* - At the beginning of Section 4, we will give a roadmap for navigating the technical results that conclude in the proof of Theorem 1: 1. Section 4.1 is concerned with the convergence analysis of the SGD algorithm under the assumption that the oracle gives a gradient estimate satisfying $\|\nabla\widehat J(L) - \nabla J(L)\|_F \leq s \|\nabla J(L)\|_F + s_0$ whenever $\|\nabla J(L)\|_F \ge \frac{s_0}{\tau}$, for $s,\tau \in (0,1)$ and $s_0>0$. The main result of this section is Thm. 2, which establishes linear convergence of the iterates for sufficiently small $s_0$, i.e., $J(L_k)-J(L^\*)\leq \varepsilon$ if $s_0\leq O(\sqrt{\varepsilon})$ and $k>O(\ln(\frac{1}{\varepsilon}))$. 2. Section 4.2 is concerned with the bias-variance error analysis of the gradient estimate, summarized as the main result in Thm. 3. 
It gives sufficient values for the batch-size $M$ and trajectory length $T$ that provide the desired bound on the gradient-estimate error required in Thm. 2, with arbitrarily small $s$ and $s_0$. 3. Combining the results of Thm. 2 and Thm. 3 yields the main result, Thm. 1. A proof sketch is provided at the end of this response. **Explanation of the empirical results:** Thanks for pointing out the strange nature of the empirical results, which made us reexamine the code and find a mistake in computing the exact steady-state Kalman gain (the multiplication by the $A$ matrix was not included in the code). The mistake produces an error of order $A - I = O(\Delta t)$, which affected the relative order for different $M$. The corrected figures are provided in the accompanying pdf and illustrate the expected order of the error curves and the linear convergence regime. We also provide additional figures for the error at the final iteration, as a function of batch-size $M$ and time-horizon $T$. The numerical results serve to illustrate and validate the presented theoretical results. We leave extensive numerical experiments with real data on applications and empirical comparison with existing approaches for a separate work. **Limitations of the learning setup:** We agree that the perfect knowledge of the system matrices is a strong assumption. Our perspective is to separate the two problems of identification of system matrices and the learning of the Kalman gain. This is aligned with certain practical considerations, as we explain below. The system identification procedure occurs through the application of physical principles and the collection of data from experiments in a controlled environment (e.g., in a wind tunnel). However, identifying the noise covariance matrices strongly depends on the operating environment, which might be significantly different from the experimental setup. 
Therefore, it is common in engineering applications to use the learned system matrices and tune the Kalman gain to improve the estimation error. Please see [Ref 1] for the application of this procedure to gust load alleviation on aircraft wings and [Ref 2] for estimation in chemical reactor models. We also emphasize that the assumed learning setup has a rich history in adaptive filtering, with numerous references and a recent survey on this topic [Ref 3]. Our plan for future research is to carry out a robustness analysis, similar to its LQR dual counterpart in [Ref 4, Ref 5], to study the effect of errors in the system parameters on the learning accuracy. [Ref 1] Hinson KA, Morgansen KA, Livne E, ``Autocovariance least squares noise covariance estimation for a gust load alleviation test-bed,'' AIAA SCITECH 2022 Forum, 2021. [Ref 2] Odelson BJ, Lutz A, Rawlings JB, ``The autocovariance least-squares method for estimating covariances: application to model-based control of chemical reactors,'' IEEE Trans. Control Syst. Technol., 2006. [Ref 3] L. Zhang, D. Sidoti, A. Bienkowski, K. R. Pattipati, Y. Bar-Shalom, and D. L. Kleinman, ``On the identification of noise covariances and adaptive Kalman filtering: A new look at a 50-year-old problem,'' IEEE Access, vol. 8, pp. 59362–59388, 2020. [Ref 4] Safonov, M. G., and Weizheng Wang, ``Singular value properties of LQ regulators,'' IEEE Transactions on Automatic Control 37.8 (1992): 1210–1211. [Ref 5] Chen, Ci, and Anthony Holohan, ``Stability robustness of linear quadratic regulators,'' International Journal of Robust and Nonlinear Control 26.9 (2016): 1817–1824. *Formal Proof of Theorem 1.* According to Thm. 3, Assumption 3 holds at each iteration with probability at least $1-\delta$, if $T \geq O(\ln(1/s_0))$ and $M \geq O(\ln(1/\delta)\big/ s_0^2)$. 
As a result, Thm. 2 is valid, hence $J(L_k)-J(L^\*)\leq \varepsilon$, if $s< \frac{1}{4}$, $s_0\leq O(\sqrt{\varepsilon})$, and $k>O(\ln(\frac{1}{\varepsilon}))$. Finally, the claim follows by combining the above bounds and using a union bound to compute the failure probability over $k$ iterations.◻ Pdf: /pdf/d88b34f8f50618f7f7efee9497b64e37e008c428.pdf
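The convergence mechanism behind Theorem 1 (SGD driven by a gradient oracle whose error satisfies $\|\nabla\widehat J - \nabla J\| \leq s\|\nabla J\| + s_0$) can be sketched on a toy quadratic objective. This is a hypothetical stand-in for $J(L)$; all names and constants below are illustrative and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic stand-in for the cost: J(L) = ||L - L_star||^2.
L_star = np.array([1.0, -2.0])

def grad_J(L):
    return 2.0 * (L - L_star)

def oracle(L, s=0.1, s0=1e-6):
    # Inexact gradient satisfying ||ghat - g|| <= s*||g|| + s0.
    g = grad_J(L)
    noise = rng.standard_normal(g.shape)
    noise *= (s * np.linalg.norm(g) + s0) / np.linalg.norm(noise)
    return g + rng.uniform(0.0, 1.0) * noise

L = np.array([5.0, 5.0])
eta = 0.1  # constant step-size, analogous in role to the paper's bar-eta
for _ in range(200):
    L = L - eta * oracle(L)

# The suboptimality gap contracts geometrically down to an s0-level floor.
gap = float(np.sum((L - L_star) ** 2))
```

With these constants the iterates contract by a constant factor per step until the $s_0$-level floor is reached, mirroring the $k \geq O(\ln(1/\varepsilon))$ iteration count in the theorem.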
NeurIPS_2023_submissions_huggingface
2023
Streaming algorithms for evaluating noisy judges on unlabeled data - binary classification.
Reject
Summary: The paper considers the problem of estimating the accuracy of noisy judges/classifiers in a streaming fashion, using only unlabeled data. Specifically, the goal is to compute the accuracy of each judge while processing items and the judge predictions for each item as part of a stream, without any associated labels for the items. Strengths: 1. The problem of evaluating noisy judges that is being explored by the paper is an interesting and important one. Weaknesses: 1. The main method presented in this paper (i.e., section 3) is not novel. For example, it can be seen as a special case of the approach presented in *Platanios, E. A., Blum A., and Mitchell T. "Estimating accuracy from unlabeled data" UAI (2014)*, which relaxes the independence assumption (though does not consider the streaming setting) and is also not cited in this paper. In fact, a lot of related work is missing including derivatives of the aforementioned paper (e.g., a direct follow-up in ICML 2016). 2. The premise of section 4 is weak. The idea that the method of section 3 is “self-alarming” because when you have dependent classifiers the accuracy estimates will be invalid is not completely correct. While this may capture some cases, there are still a lot of cases where you can have dependent classifiers, and where there exists a valid solution to the presented system of equations. Thus, I am not convinced by the main claim of this section. 3. The paper considers a streaming setting but it does not provide motivation for it. For example, it was not clear to me why we cannot store the predictions of the classifiers as items are processed in a database and then perform accuracy estimation periodically. If we have 8 classifiers and 2 possible labels, this would require 1MB per 1 million items, which does not seem expensive (and we can also perform random sampling if space becomes an issue). 4. The paper is presented in a manner that is hard to follow and could be significantly improved. 
The whole paper could be presented in a simpler and more organized manner; section 5 in particular was hard to follow without spending a significant amount of time understanding the argument being made. 5. The experimental evaluation is a bit lacking in that only toy datasets are being used, there is no explanation for what they are and why they are interesting, and there are very limited results being presented. Ideally, I’d like to see an “Experiments” section in the paper that describes the setup targeted at testing some hypotheses, the datasets, and the evaluation metrics, and then presents and discusses the evaluation results. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Regarding the majority vote estimator, I don’t understand why you need 2^n variables in your sketch. I think all you need is n+1 variables which are defined as follows: (i) the number of items that have been processed thus far, and (ii) for each classifier, the number of times its predicted label matches the majority vote label. Is my understanding correct or am I missing something? 2. I didn’t understand section 2.1 and I also disagree with the statement in lines 163-164. I can see decision being framed as inference and I am also confident that oftentimes making hard decisions as opposed to keeping soft values around (which I assume is what you are referring to as “decision”) can be helpful. Can you please elaborate and also provide some reference for this claim if I am mistaken? 3. I didn’t understand what you mean by the “principal/agent monitoring paradox”. Can you please explain what that is and also why it is a paradox? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: There is no discussion of limitations and potential negative social impact in this paper. 
One recommendation would be to try and think about what the implications could be for, say, voting systems, and also in situations where these methods are used to evaluate people whose income may depend on this evaluation (e.g., crowdworkers). In this case, the independence assumption being made by the paper may be too strong and yield incorrect evaluations that could negatively and unfairly affect the income of those people. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We disagree strongly that Platanios et al. have already done this work and "better". As discussed in the general rebuttal, they are using the ensemble decisions PLUS additional logical constraints. Nowhere in their paper do they flag that three independent classifiers have a closed algebraic solution. Nor do they offer any theorems that their algorithm (which must be run for each evaluation) converges to the exact answer given its assumptions. In addition, they never discuss how their approach can detect the failure of its own assumptions. Question 1. We did a quick calculation and interpreted your scheme as a linear operation from a 7-dimensional space (in the case of n=3) down to a four-dimensional one (your four counters). How is a linear projection from 7 to 4 dimensions invertible? We disagree that n+1 counters can be used to keep a tally of 2^n independent events. Question 2. Question 3. The principal/agent monitoring paradox refers to the fact that the principal is forced to evaluate the work of its agents to make sure everything is okay. But the principal delegated the work because they did not want to do it. In our work, the paradox arises because any evaluator that relies on unlabeled data and has no access to the ground truth can never be completely sure that it is carrying out a correct evaluation. The most one could do is ameliorate the paradox, as we have done in this paper, by discovering that algebraic evaluators have detectable failure modes. But no detector can be perfect, so this self-detection is not perfect. This is observed technically in our work in the well-defined blindspots that cause the evaluator to incorrectly return integer answers when, in fact, the classifiers are correlated. To state it succinctly: algebraic evaluation has no false positives when detecting correlation but has false negative cases when asserting that the classifiers are independent. 
This is not a terrible technical flaw because we can always detect when we are on the blindspot lines in evaluation space. So we can always flag the "on the blindspot" evaluations. For such evaluations, we cannot know if the classifiers are or are not independent.
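The disagreement over Question 1 can be made concrete with a minimal counterexample (our own construction, with hypothetical names): two different full vote-pattern tallies for n=3 classifiers collapse to identical values of the proposed n+1 counters, so those counters cannot recover the 2^n-dimensional sketch.

```python
from itertools import product

n = 3
patterns = list(product([0, 1], repeat=n))  # the 2^n = 8 voting patterns

def reviewer_counters(counts):
    """The proposed n+1 summaries: total item count, plus per-classifier
    counts of agreement with the majority vote."""
    total = sum(counts.values())
    agree = [0] * n
    for pattern, c in counts.items():
        majority = 1 if sum(pattern) > n // 2 else 0
        for i, vote in enumerate(pattern):
            if vote == majority:
                agree[i] += c
    return (total, *agree)

# Two distinct streams: unanimous votes split 10/10, vs. 20 all-zero votes.
tally_a = {p: 0 for p in patterns}
tally_b = {p: 0 for p in patterns}
tally_a[(0, 0, 0)] = 10
tally_a[(1, 1, 1)] = 10
tally_b[(0, 0, 0)] = 20

# Same n+1 counters, different underlying sketches: information is lost.
assert reviewer_counters(tally_a) == reviewer_counters(tally_b)
assert tally_a != tally_b
```

Both tallies yield the same four counters (20 items, each classifier agreeing with the majority 20 times), yet they are different points in the 8-dimensional pattern space, so the projection is not invertible.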
Summary: This paper introduces a new inferential evaluator for evaluating noisy binary classifiers on unlabeled data in a streaming manner. Specifically, compared to the evaluator based on majority votes, the new evaluator gives a more complete and reasonable modeling of the true label prevalence and each classifier’s accuracy. In addition, the properties of the new evaluator are also mathematically discussed, and the relationship between error dependence and the evaluator's estimates is empirically explored through experiments. Strengths: 1. The paper addresses a significant problem in machine learning - evaluating the performance of binary classifiers on unlabeled data. 2. The proposed methods could have wide-ranging applications in various fields where machine learning is used, making the paper highly relevant. 3. The author provides mathematical proof to support the proposed methods and conducts empirical tests on several datasets. 4. The proposed generic framework that is based on algebraic geometry can have a positive influence on evaluation methods for unlabeled data. 5. The new algebraic evaluation method bypasses the representation and OOD problems in ML. Weaknesses: 1. The paper is very hard to read and lacks the background needed to help the reader understand it. Moreover, the supplement mentioned in lines 124, 171, 236, and 292 is missing from the paper. There are no detailed proofs for all the theorems. 2. The organization of the paper is confusing. The logic chain of the whole paper needs to be improved; it would be better to briefly introduce the outline and main content of each section at the beginning. 3. The research problem being addressed needs to be stated formally in mathematical language and explained clearly and intuitively, ideally with a toy example or case study. 4. 
The current limitations of existing research, the proposed solutions (contributions of this paper), and the targeted experimental questions are not clearly listed, making it hard to grasp the authors' ideas. 5. More experiments are needed. There are no experiments supporting that the performance of a majority vote-based evaluator is worse than that of the inferential one. And from the perspective of experiments, the advantage of the new evaluator is not made clear. 6. Only label prevalence is formalized in the paper, while there are no formulas for classifier accuracy. 7. The complexity of the concepts and the heavy use of mathematical proofs might affect the paper's clarity. The author could consider providing more background information, intuitive explanations, or visual aids to improve the paper's accessibility. 8. The experiment part fails to show the superiority of the proposed method compared with other baselines. 9. The equations need to be carefully edited using formal math language. The space should be used for some meaningful and essential equations, not for simple ones such as summation or average operations. 10. Typos: In line 167, "it could have been a βitem, not an αone" should be "it could have been a β item, not an α one". Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: Is it possible to compare the proposed method to other baselines on the same datasets, in order to show the superiority and significance of the new method? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 1 poor Contribution: 2 fair Limitations: The paper concludes with a brief discussion of how algebraic stream evaluation can and cannot help when done for safety or economic reasons. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The experiments were meant to highlight what is unique about our algebraic evaluator - that it can detect when its own evaluation assumptions have failed. Since there is no other published evaluator on unlabeled data that can detect the failures of its own assumptions, these experiments cannot have baseline comparisons. The very existence of the experiments, and that an evaluator could ever detect the violation of its own assumptions, is what makes them significant and, we believe, sufficient proof of the utility of this approach.
Summary: This paper considers the problem of evaluating noisy binary classifiers on unlabeled streaming data. It aims to estimate the prevalence of the labels and the accuracy of each classifier on them, given a data sketch of label predictions by the members of an ensemble of noisy binary classifiers. The authors propose two algebraic evaluators based on the assumption of error-independent classifiers: the first is based on an additional assumption of majority voting, and the second is fully inferential and is guaranteed to contain the true evaluation point. Strengths: - The results of this paper seem to be well supported by the rigorous mathematical analysis as well as empirically demonstrated on three benchmark datasets. Weaknesses: - The independence assumption may not be satisfied in practice, especially when the ensemble of classifiers consists of different models trained on the same or overlapping datasets, or even the same model but trained for different durations. - The significance of this paper is unclear. It would be better for the authors to provide some concrete real-world examples that fit the problem setting of this paper and explain the possible uses of the quantities desired to be estimated in these examples. - The proposed evaluators may be sensitive to noise or corruption in the data sketch. The solutions of the algebraic equations will change if the data sketch is not perfectly recorded, leading to mistakes in distinguishing between independent and correlated evaluations. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see Weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have discussed the limitations of this paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. We agree with the reviewer that classifiers may not be independent. But as asserted in the paper, we consider the operational scenario where one engineers an evaluation ensemble that is nearly independent. What remains then, for safety reasons, is the ability to detect when the classifiers are not independent enough. This is why the paper focuses on the one novel aspect of our algebraic evaluator: that it can detect when the classifiers are not independent. 2. To make algebraic evaluation as proposed here fully operational, one would need the formulas that would allow one to measure the small correlations between nearly independent classifiers. 3. We are not sure why the mushroom and adult datasets are not examples of "real-world" data that illustrate concretely how to use algebraic evaluation. Since this is the foundational theoretical paper on this approach, we believed that focusing on just evaluation, and not the implications of any particular evaluation, was proper. 4. Yes, the proposed algorithm is not designed for adversarial situations where someone corrupts the input to the formulas. Most papers published in NeurIPS are also not immune to adversarial attacks. We do agree that the higher-order polynomials involved in this work should be expected to have poor condition numbers in certain settings. Detailing these numerical sensitivities will be crucial in proving the reliability of future proposed algebraic formulas.
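The condition-number caveat in point 4 can be illustrated with a standard Wilkinson-style experiment; the polynomial below is a generic textbook example, not one of the paper's evaluation polynomials:

```python
import numpy as np

# p(x) = (x-1)(x-2)...(x-8): its roots are notoriously sensitive
# to small perturbations of the coefficients.
true_roots = np.arange(1.0, 9.0)
coeffs = np.poly(true_roots)  # monic coefficients from the roots

perturbed = coeffs.copy()
perturbed[1] *= 1 + 1e-6  # nudge the x^7 coefficient by 1e-6 (relative)

new_roots = np.sort(np.roots(perturbed).real)
shift = float(np.max(np.abs(new_roots - true_roots)))
# The largest root displacement is orders of magnitude bigger than 1e-6.
```

Here a relative coefficient perturbation of 1e-6 moves the roots by roughly 1e-2, an amplification of about four orders of magnitude, which is the kind of sensitivity the rebuttal anticipates for higher-order evaluation polynomials.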
Summary: The paper addresses making decisions based on the outputs of three binary classifiers. More precisely, it focuses on evaluating the performances of noisy classifiers. It considers majority voting on one hand, and a proposed evaluation scheme based on the classifiers' accuracies. The paper establishes several theorems so as to demonstrate the superiority of the second (proposed) evaluation scheme. Then, experiments are conducted to test the ability of the proposal to avoid making decisions in problematic situations—e.g., correlated classifiers. Strengths: I see no particular strength in the paper that would mitigate its flaws. Weaknesses: The proposal suffers from several major flaws. First of all, it is badly written. The problem is not clearly stated. The mathematical objects (typically, the prevalence of the labels, and the label accuracies) are not properly introduced and defined. Some key notions (such as the "evaluation variety", the precise definition of correlated classifiers, among others) are also not defined. There are frequent references in the text to notions which have not been exposed yet (e.g., "the evaluators for binary classifiers" in the introduction, Theorems 1 and 2 in the introduction as well, Theorem 3 in Section 1.2). Besides, the paper ignores a large amount of literature. The problem addressed has connections with computational social choice (voting schemes), of which some works are mentioned. But it also relates to classifier combination (boosting, error-correcting output codes, weighted averaging, racing algorithms, etc): the problem of evaluating the performance of an ensemble has been addressed in a number of works which are ignored here. Last, but not least, the paper focuses on a very specific case—the ensemble has only three classifiers. This is very restrictive and such ensembles are hardly used in practice. 
Some claims are not supported—e.g., in Section 1.1, "Seemingly correct estimates are estimated values that seem to be correct because they have this real, integer ratio form. Estimates that do not have this form are obviously incorrect." This is not the case for the $F_\beta$ measure, for instance. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: How would your approach compare to classifier combination techniques where classifiers are combined based on their accuracy (e.g., racing algorithms or boosting)? How could the approach be generalized to more than 3 classifiers in the ensemble, or to multi-class classification problems? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 1 poor Limitations: The authors have addressed the limitations of their approach, but in my opinion only in a restricted way. They have not addressed the potential negative societal impact of their work, but I do not think that this is crucial here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. We agree with the reviewer that this work has large applicability to the literature of social choice. We are also aware of the extensive work that has been done in other fields with this universal problem of principal/agent monitoring. That is precisely why we chose to frame the concerns of the paper within the principal/agent language. Given the size of the paper and the difficulty of presenting these novel mathematical techniques, we did not have the luxury of discussing these wider concerns. We are also aware that there are many ensemble algorithms for decisions (such as boosting). How are these relevant to a paper that discusses using ensembles for evaluation, not decision? 2. The paper restricted itself to three classifiers because those are the only ones for which we have exact theorems. Research on ensembles of more than 3 classifiers remains open. We cannot tell, given our current knowledge, when the number of classifiers will become too large to solve. But note that if the purpose is just evaluation, and your classifiers are error independent, it does not matter how many you have. When one considers the technical debt of maintaining and running multiple classifiers for the purpose of evaluation, three or four classifiers may be just right for a specific application. 3. Since this paper is concerned with evaluation, not with combining the classifiers' decisions to make a final, grand decision on each item, we do not have any guidance on how best to combine their decisions. If one is concerned about the safety of one's decisions, it seems that the work on risk minimization by Hendrycks et al. would be the approach to take.
Rebuttal 1: Rebuttal: This section will address two criticisms shared by two or more reviewers: 1. that the paper is not novel and the problem was already treated in Platanios et al. (2014); 2. that the experimental results are weak and lack baseline comparisons. The purpose of the paper was to devise an algebraic evaluator that has no free parameters, does not use probability, and minimizes its evaluation assumptions. The utility of devising such an algorithm is that it would contribute to AI safety via the concept of defense in depth: providing another algorithm that can be used in conjunction with more complex and specific evaluators to provide multiple assessments of the performance of an AI system in production or in the field. The catch-22 or paradox of evaluation on unlabeled data is that the evaluator itself is now suspect since it has to make assumptions. Any real evaluator, whether algebraic or probabilistic, will have them. How do we know these assumptions are correct during any particular evaluation on unlabeled data? Until this paper, no published paper has been able to show how this can be addressed. No paper on evaluation with unlabeled data known to us, not even Platanios et al., discusses how the evaluator can detect its own failure. No other paper in evaluation has shown that there is a closed, exact algebraic solution for three independent classifiers. This is not mentioned in Platanios et al. or any other paper. We do not view AI safety algorithm development as a horse race where one best algorithm should prevail. Rather, safety in depth requires that we consider multiple assessors to be truly safe. The algebraic approach here is not in competition with the approaches of Platanios et al. or any of the ones we cite. Indeed, one of the technical achievements of the paper (not remarked on by any of the reviewers) is that we were able to solve the fully correlated 3-classifier Groebner basis. 
This allowed us to discover an (n+1)-dimensional surface inside the (2n+1)-dimensional space of the unknown basic performance statistics. This is an immediate improvement that can be used by the methods of Platanios and others, since it immediately restricts the possible correct answers. Once one realizes that no other method can detect its own failure modes, the logic of the experiments presented should be clearer. A couple of reviewers criticized the paper because it lacked comparison baselines. How are we to tell how well algebraic evaluators are doing without these baselines? But no other method has a way of discovering the failure of its own assumptions, so there can be no comparisons. Algebraic evaluators are unique in this feature, and this is one of the major points of this paper and the subject of a theorem in the paper: we can detect, with no false positives, when the classifiers during a test did not act independently. But because no detector (even one of assumption failures) can be perfect, it is impossible to certify independence along the well-designated blind spots that we detail in the paper. AI safety cannot be predicated on algorithms that always succeed in the evaluations. This is impossible to attain. Tests will fail for reasons beyond the control of any evaluation algorithm. We are better off when we can detect such failures in a principled way that does not make fantastical claims that it will always work or never fail its own verification. We share the reviewers' wish for the completion of the full evaluation loop with this algebraic approach. Having the unique ability to detect its own failure modes is useful but not sufficient. We need to develop the equations that will be able to detect small amounts of correlation when a system hovers around error independence. That remains an open research question we are working to address. In the appendix we showed three evaluations corresponding to the three datasets we used. 
Implicit in our work is the connection between failure modes and the size of the error correlations between the classifiers. Another open research question is whether we can derive bounds on the correlation values when a failure occurs. For example, we have observed experimentally that correlations above 10% will trigger imaginary-valued estimates. The attached PDF combines the experiments in the paper with the evaluations to show qualitatively how the evaluations improve as a dataset has fewer failure modes. This qualitative impression needs to be sharpened by developing numerical bounds that relate failure rates to correlation values. Finally, a couple of reviewers viewed the paper's focus on just three classifiers as a weakness and not as a sign of the early stages of this research program. The strength of the paper lies in the exact theorems it is able to provide. These depend on the two major technical achievements of the paper: 1. the construction of an exact, algebraic solution for three independent classifiers, and 2. the complete solution of the Groebner basis for three arbitrarily correlated classifiers. If the classifiers are independent, achievement 1 means that we have solved the evaluation problem for n >= 3. Just take any trio of classifiers and apply the exact algebraic solution. We are currently working on solving the correlated Groebner basis for four classifiers. This remains an open problem. We may not have solved all the open research problems that this purely algebraic approach provides for those concerned with AI safety, but we believe that its current achievements - detection of failures of its own independence assumption and the (n+1)-dimensional subsurface in the (2n+1)-dimensional evaluation space - are enough to complement other evaluation approaches.
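The role of the error-independence assumption can be checked in a small simulation (a synthetic setup of our own, not one of the paper's datasets): when classifier errors are independent given the true label, joint vote-pattern frequencies factor into products of the individual accuracies, which is what makes an exact algebraic solution possible.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: three error-independent binary classifiers.
prevalence = 0.6                  # P(true label = 1)
acc = [0.85, 0.75, 0.90]          # per-classifier accuracy on both labels

N = 200_000
labels = (rng.random(N) < prevalence).astype(int)
votes = np.stack(
    [np.where(rng.random(N) < a, labels, 1 - labels) for a in acc]
)

# Under error independence, pattern frequencies factor into products.
all_correct = float(np.mean((votes == labels).all(axis=0)))
only_first_wrong = float(np.mean(
    (votes[0] != labels) & (votes[1] == labels) & (votes[2] == labels)
))
expected_all = acc[0] * acc[1] * acc[2]                # 0.57375
expected_first_wrong = (1 - acc[0]) * acc[1] * acc[2]  # 0.10125
```

Correlated errors break this factorization, which is exactly the kind of assumption failure the algebraic evaluator is designed to flag.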
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper considers the problem of evaluating an ensemble of binary classifiers on unlabeled data in a streaming setting. The authors first describe a baseline which treats the majority vote as the correct label and evaluates each classifier accordingly. Then they propose an evaluator based on an assumption that the classifiers are independent. The algebraic expression for this evaluator should return rational numbers if the assumption holds (as they should correspond to ratios of integer counts). Thus, this evaluator has failure modes that can be detected clearly, unlike the majority-voting baseline that may return incorrect but seemingly sensible values. Strengths: While I am not an expert in evaluation using unlabeled data and cannot speak definitively, the proposed algebraic evaluation and characterizing failure modes by algebraic failures seem creative and novel. Weaknesses: I found the paper overall quite difficult to follow, and thus my assessment of its technical contributions may be limited. More thorough motivation and background on the problem setting (e.g. using real-world applications), as well as careful characterization of the proposed algebraic evaluator in contrast with existing approaches, would help make the paper much more approachable. I also had a hard time following most of Section 1 (especially 1.2) without the technical details in Section 3. The paper is also missing some related work discussion, and its contributions with respect to prior work are not very clear. I struggled to see the connection to the works mentioned in the first paragraph of Section 1.3. Another work that appears very relevant is [1]; how does this paper relate to their approach? Throughout the paper, only the setting with three binary classifiers is considered. A more general formulation may be helpful. As far as I can tell, this approach would scale exponentially in the number of classifiers, which could limit its impact in practical settings.
Empirical evaluation was limited only to analyzing the failure rates, and there were no experiments on how well the proposed approach performs as an evaluator (i.e., how close are the error rate estimates to the true error rates?). [1] Platanios, Emmanouil Antonios, Avrim Blum, and Tom Mitchell. "Estimating Accuracy from Unlabeled Data." 2014. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Please see above for the main questions and suggestions. As a minor suggestion, I think Theorem 1 would be very intuitive to see using probabilities. It involves the probability of true label being $\alpha$ and the conditional probabilities of each outcome given the true label being $\alpha$ or $\beta$; the independence assumption allows turning the probability of a joint outcome into a product of probabilities for each classifier’s outcome. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Overall, yes. One limitation that was not discussed is that the approach seems to scale exponentially in the number of classifiers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
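The probabilistic reading of Theorem 1 suggested in the review above can be written out as follows (a sketch with assumed notation, not the paper's own statement): for three classifiers casting votes $(v_1, v_2, v_3)$ on an item whose true label is $\alpha$ or $\beta$, the independence assumption factorizes each joint voting-pattern frequency into per-classifier conditionals:

```latex
P(v_1, v_2, v_3)
  = P(\alpha) \prod_{i=1}^{3} P(v_i \mid \alpha)
  + P(\beta)  \prod_{i=1}^{3} P(v_i \mid \beta)
```

Summing over the $2^3$ voting patterns recovers the observable data sketch from the unknown prevalence and per-classifier accuracies.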
Rebuttal 1: Rebuttal: 1. Relation of the Platanios et al. paper to this work. Their work is concerned with estimating the accuracy of multi-class classifiers whenever we have their decisions AND additional logical relations that can serve as ground truth constraints. Since whatever logical relations are chosen are specific to the particular evaluation (for example, the NELL-2 logical constraints in their Fig. 2), their work cannot be considered as having universal application. By avoiding all possible constraints, our evaluator is truly universal in application. Furthermore, they never identify or assert that the case of three independent classifiers is solvable in a closed algebraic form. Their proposal is an algorithm that must be run for each evaluation. No such step is needed if one has a closed algebraic estimate. 2. The paper focused on three classifiers exclusively for succinctness, because we have exact theoretical proofs for such cases, having solved the Groebner basis for three correlated and independent classifiers. We are currently working on a general solution to the Groebner basis for 4 correlated classifiers but have not succeeded. We know of no other work on evaluation with unlabeled data that can claim these exact solutions and use them as we have done here - to detect when the evaluation itself is failing. 3. We focused on measuring recognizable failure rates precisely because that is what is most novel about our approach. The appendix contains three evaluations for the datasets, but it was not our main focus, as you observe. We hope in future work to finish the research program started here and develop, perhaps with Taylor series expansions of the correlated Groebner basis, formulas that can give reasonable estimates when classifiers are weakly correlated. 4. Since the paper demonstrates that we can get exact solutions for the Groebner basis of independent and correlated classifiers in the case of three, we fail to see why "exponential scaling" is a problem.
Why do we need to use more than three or four classifiers? Are algorithms that do not scale well at large n useless at small n? Quicksort is not. We expect our exact solution will also be useful in many practical applications. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Regarding "why do we need to use more than three or four classifiers", I also didn't see any evidence in the paper or the author response to convince me otherwise. For general applicability, we would consider arbitrary numbers of classifiers, and as far as I can tell, the direct extension of the proposed method would scale exponentially in the number of classifiers. I do not see how the quicksort algorithm is relevant here: it scales at worst quadratically, not exponentially. --- Reply to Comment 1.1.1: Title: Exponential nature of the problem versus our non-exponential solutions Comment: Perhaps if we ground the discussion of exponential complexity in specific areas of the paper we can clarify our position that our proposed methods are not exponential and can be used with an arbitrary number of classifiers. The problem of considering the voting patterns of an ensemble of binary classifiers is exponential by its very nature. There are 2^n possible voting patterns. And, as one of the theorems in the paper states, completely describing the data sketch of arbitrarily correlated classifiers requires that we know 2^(n+1)-1 sample statistics. But problems with exponential structures can have non-exponential solutions. This paper shows two examples of such non-exponential solutions to the problem of evaluating classifiers. The first one uses the independent solution for three classifiers to detect if they are so correlated that they return nonsensical solutions - to apply this method to ANY number of classifiers n>3 just requires that we compute the formula for all possible trios in an ensemble of n.
This scales as n^3, a decidedly non-exponential solution to the problem of detecting highly correlated classifiers. In addition, we detail a universal (n+1)-dimensional surface that applies to ANY ensemble of classifiers (correlated or not) in the (2n+1)-dimensional evaluation space of the base statistics and that only requires considering all possible pairs in an ensemble of n classifiers. This scales as n^2, exactly like the Platanios agreement equations. These are two methods for understanding the evaluation of binary classifiers that are not, as the reviewer claims, scaling exponentially when applied to an arbitrary number of classifiers. It is not our fault that the nature of the problem of specifying the sample statistics of an evaluation by n binary classifiers requires 2^(n+1)-1 statistics. It is what it is. Any approach that completely describes the sample statistics of binary classifiers has to contend with that. This paper shows that the exponential nature of the problem is NOT a problem when it comes to completely solving independent classifiers. The exact solution shows that n=3 is SUFFICIENT. Absent some argument by the reviewer for why the exponential nature of the problem makes it NECESSARY to only consider exponentially hard solutions, or that not doing so negates or invalidates the methods we have presented here, we really do not see the practical blocks the reviewer is arguing for. The correlation coefficients scale as 1/n. Since the only practical reason to have ensembles is to have them nearly independent, we see no reason why weakly correlated classifiers will not, in some future paper, be shown to be practically evaluated by just considering pair correlations or even triple ones. And the practical utility of this is no different than when one uses a finite number of terms in a Fourier series to approximate a function within a required error margin, or just the first few terms in a Taylor series expansion.
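The trio-based scaling argument above can be sketched directly. This is only a counting illustration (the per-trio algebraic formula itself is not reproduced here): applying an exact three-classifier solution to every trio in an ensemble of n classifiers costs C(n, 3) = O(n^3) evaluations rather than 2^n:

```python
# Sketch of the n^3 scaling claim: checking every trio of an ensemble of
# n classifiers with a hypothetical exact three-classifier evaluator
# visits C(n, 3) trios, which grows polynomially, not exponentially.
from itertools import combinations
from math import comb

def trio_count(n):
    """Number of classifier trios the independent-solution check visits."""
    return sum(1 for _ in combinations(range(n), 3))

assert trio_count(10) == comb(10, 3) == 120  # ~n^3/6, far below 2^10
```

The analogous pair-based check for the (n+1)-dimensional surface visits C(n, 2) = O(n^2) pairs, matching the scaling of the Platanios agreement equations.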
null
null
null
null
null
null
New Bounds for Hyperparameter Tuning of Regression Problems Across Instances
Accept (poster)
Summary: The authors tackle the problem of hyperparameter tuning across problem instances, as proposed by Balcan et al. (2022). They propose three novel learning guarantees regarding the sample complexity of tuning regularization parameters: i) an improved upper bound on the pseudo-dimension for elastic net; ii) a matching lower bound in the same setting; iii) a bound on the pseudo-dimension and on the generalization error for logistic regression. Strengths: The paper presents new results, which either improve the state-of-the-art (i) or pave a new way (ii and iii) in analyzing common learning techniques. I think these results are topical and interesting for the community. Weaknesses: As an unschooled researcher regarding the theory of hyperparameter tuning across instances, I found the paper not that easy to read. In particular, I missed the target the authors aimed at (it is stated at the end), which would have helped me to understand. Here are some remarks which may help in improving the paper: 1) Some elementary definitions are missing and curb the reader's understanding (particularly the pseudo-dimension, the Rademacher complexity and the ERM principle in Theorem 4.5). Also, a comment for Theorem 3.1 would be appreciated. As for me, a simple improvement of the paper would be to state informally a kind of Theorem 4.4 at the beginning, such that the reader is more able to understand Sections 3 and 4. 2) Minor comments: “optimization problem 1” should have capital letters, a “be” is missing Line 188, an “of” is missing Line 191, “an GJ algorithm” in the caption of Figure 1 should be “a GJ algorithm”, “Roth et al. ([3])” Line 251 should be “Roth [3]”, the increasingness of $R(||\theta||)$ would be better placed before stating the representer expansion, the update of $\beta_t^{(\epsilon)}$ in Algorithm 1 should be stored in $\beta_{t+1}^{(\epsilon)}$ instead of $\beta^{(\epsilon)}(\lambda)$; moreover, it involves the difference of two vectors of different sizes.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) Can the authors define formally the probability space used for the problems (Line 117)? It is a bit disturbing to manipulate objects of varying size. 2) Should it be $h_{b_y}$ and $g^{(1)}(y)$ in Lines 183-184? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and suggestions. We really appreciate that the reviewer acknowledges the theoretical contributions of our paper, and we address the reviewer’s concerns below. In particular, we will state the target early on in the paper to improve clarity. **Typos in Lines 183-184, and minor comments**: thank you for pointing them out; we will fix those typos in the revised draft. **Additional comments for Theorems 3.1 and 4.4**: thank you for your suggestion. We will add comments and further discussion before the statements of those results in the revised draft for a better presentation. **Adding elementary definitions and results**: unfortunately, due to the space limitation, we could only add some of the elementary definitions in the Appendix (Appendix A). We will try our best to use the additional page in the camera-ready version to integrate the fundamental notions in the main body. **Concerns about varying sizes of problem instances**: nice catch! It is true that this setting allows the number of train ($m$) and validation ($m’$) samples in each problem instance $P$ to vary. It does not even require that the feature set across the different problem instances be the same: different problem instances might have different feature sets or even different numbers of features. This makes the setting incredibly general and able to handle feature reset, as mentioned in [1]. As mentioned in our work, given a tuple $(m\_i, m’\_i, p\_i)$ of the number of train samples, number of validation samples, and number of features, we denote by $\mathcal{R}\_{m\_i, m’\_i, p\_i}$ the set of problem instances of that shape. Assuming that for any problem instance the validation set can contain at most $m$ samples and at most $p$ features, the problem space can be defined as a finite union $\Pi_{m,p} = \cup_{m\_1 \geq 0, m\_2 \leq m, p\_1 \leq p}\mathcal{R}\_{m\_1, m\_2, p\_1}$.
Then, the problem distribution $\mathcal{D}$ is simply some unknown distribution over $\Pi_{m, p}$. We will elaborate on this point more carefully in the revised draft. **References:** [1] Maria-Florina Balcan, Mikhail Khodak, Dravyansh Sharma, Ameet Talwalkar. Provably Tuning Elastic-Net Across Instances. NeurIPS’23. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering my concern regarding the clarity of the paper, and for agreeing to improve it.
Summary: The paper analyses the complexity of hypothesis classes where the hyperparameters of logistic regression and linear regression are tuned, in the setting where multiple datasets are available. Strengths: The paper derives tighter upper bounds for the elastic net setting and proves a matching lower bound, which are extended to KRR, for the setting where regularization parameters are learned from problem instances (e.g. meta-learning). The regularized logistic regression setting is tackled by approximating the function class. Weaknesses: 1 While I am quite familiar with learning theory (e.g. VC bounds, PAC, pseudo-dimension), I really have a hard time grasping the problem setting. I had to consult [15] and [27] for that. I think the paper could be greatly improved by clarifying the problem setting in more detail, with some example settings machine learners will be familiar with, e.g. the example of cross validation of [15]. It would also be good to clarify generally what the inputs to the learner are; it is still a bit unclear to me. So are [X_train, y_train, X_test] the inputs? And the output is $\lambda$? The strange thing is that the hypothesis seems to take $\lambda$ as input (definition of h), while $\lambda$ seems to be the thing we want to learn. 2 Furthermore, in line 124 it is stated "the learning algorithm in this scenario has been mentioned in [15], which is based on simulated annealing"; this seems wrong, as the learner in that work [15] is operating in the online learning setting, while the current work seems to focus more on the statistical learning setting. Shouldn't the ERM learner or CV learner of [15] be mentioned instead? 3 Generally, I think more examples are necessary to grasp the main point of the setting. It would also be good to further clarify why this is an important problem to study; for which learners can we now derive theoretical guarantees?
4 Once I had clarified the setting I had no time to read the paper furthermore in detail, so I cannot judge the other aspects in more detail. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: see above Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: A challenge is to give a tighter PAC bound for the 01 loss for RLR Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for constructive feedback. We really appreciate that the reviewer acknowledges the contribution of our work and also thank them for their time understanding our work. We will take the reviewer’s comments into account for improving the readability of the paper and use the extra page in the camera-ready version to provide additional illustrations and discussion explaining our setting (see also, attached PDF). **Improving clarification of the setting**: We will include examples and illustrations to further clarify our setting. For simplicity, consider the LASSO regression problem: the inputs of the learner are problem instances $P$, which are specified by the tuple $P = (X, y, X\_\text{val}, y\_\text{val})$. Given a problem instance $P$ and a regularization parameter $\lambda > 0$, we can define the loss function $h(P; \lambda) = \frac{1}{2} \|\|y\_\text{val} - X\_\text{val} \hat{\beta}\|\|\_2^2$, where $\hat{\beta} = \text{argmin}\_{\beta}\frac{1}{2}\|\|y - X\beta\|\|_2^2+\lambda\|\|\beta\|\|\_1$, and our goal is to learn a good value $\lambda$. So the hypothesis set is the family of regression algorithms $h(\cdot; \lambda)$ parameterized by $\lambda$ (not taking a fixed $\lambda$ as an input), and our goal is to learn a good value $\lambda$ corresponding to a good regularization for typical problem instances $P$. We will emphasize and illustrate this point to clarify the setting in the revised draft. We agree that including the example of cross-validation in [1] helps clarify the settings of this work and we will include it in the final version of the paper. In particular, $(X_\text{val}, y_\text{val})$ could correspond to validation splits from a fixed dataset to capture usual cross-validation. The setting of course is more general and captures multi-task learning as well where the goal is to learn a common hyperparameter that works well across related tasks. 
**Comparison with prior work [1]**: While [1] considers both the statistical learning setting and the online setting, we mainly focus on the statistical learning setting improving the results of [1] for the elastic net regression in this setting to obtain asymptotically tight bounds on the pseudo-dimension. We also consider logistic regression which involves technical novelty as no closed-form solution is known. **More discussion on the importance of this line of work**: the problem of tuning regularization parameters in regularized linear models has always been a fundamental issue: choosing inappropriate parameters might cause the model to underfit the data or impair the effect of regularization. In this work, we study a variant of the tuning regularization parameter problem, which involves tuning across multiple problem instances from the same underlying problem distribution. This setting is general and can handle practical scenarios, including the case where the feature set of the problem instances varies or when regression instances from the same domain need to be solved repeatedly. Hence, theoretical analysis for this problem is important and worth exploring. **Concern about the statement in Line 124**: Thank you for correcting this. It is true that the ERM learner should be mentioned instead. We hope that our answers satisfy the reviewer. If any further clarification is required, please let us know. **References:** [1] Maria-Florina Balcan, Mikhail Khodak, Dravyansh Sharma, and Ameet Talwalkar, Provably Tuning Elastic-Net Across Instances, NeurIPS’23.
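A minimal sketch of the tuning-across-instances setting clarified in this rebuttal, substituting one-dimensional ridge regression (which has a closed form) for the paper's elastic net; the instance tuples and the lambda grid below are illustrative, not taken from the paper:

```python
# Hedged sketch of tuning a regularization parameter across problem
# instances P = (x_train, y_train, x_val, y_val). Uses 1-feature ridge,
# argmin_b 0.5*sum((y - b*x)^2) + 0.5*lam*b^2, which has a closed form;
# the paper's elastic net does not reduce to this.

def fit_ridge_1d(x, y, lam):
    """Closed-form ridge coefficient: b = sum(x*y) / (sum(x^2) + lam)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

def val_loss(instance, lam):
    """h(P; lambda): validation loss after training on P's train split."""
    x, y, xv, yv = instance
    b = fit_ridge_1d(x, y, lam)
    return 0.5 * sum((yvi - b * xvi) ** 2 for xvi, yvi in zip(xv, yv))

# Illustrative instances "drawn" from one problem distribution; the ERM
# learner picks the lambda minimizing average validation loss over them.
instances = [
    ([1.0, 2.0, 3.0], [1.1, 2.0, 3.2], [4.0], [4.1]),
    ([1.0, 2.0], [0.9, 2.1], [3.0], [2.9]),
]
best_lam = min([0.0, 0.1, 1.0, 10.0],
               key=lambda lam: sum(val_loss(P, lam) for P in instances))
```

The hypothesis class is the family $h(\cdot; \lambda)$ indexed by $\lambda$, and the instances play the role the rebuttal describes: samples within an instance may be related, but instances themselves are i.i.d. from $\mathcal{D}$.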
Summary: The main idea of this paper is to address the challenge of tuning regularization coefficients in regression models with provable guarantees across problem instances. The authors investigate the sample complexity of tuning regularization parameters in linear and logistic regressions under l1 and l2 constraints in the data-driven setting. They provide new upper bounds for the pseudo-dimension of the validation loss function class, which significantly improves the existing results on the problem. Additionally, the paper introduces a new approach for studying the learning guarantee in logistic regression regularization coefficients. Overall, the paper aims to contribute to the literature by providing improved bounds and advancements in the field of hyperparameter tuning for regression problems. Strengths: 1. Improved Bounds: One strength of this paper is that it provides improved upper bounds for the pseudo-dimension of the validation loss function class in linear and logistic regressions. These improved bounds contribute to a better understanding of the sample complexity of tuning regularization parameters in regression models. 2. Generality and Applicability: The proposed approach for analyzing the approximation of the original validation loss function class is not limited to regularized logistic regression but can be extended to a wide range of other problems. This demonstrates the generality and applicability of the approach, making it a valuable contribution to the field. Weaknesses: 1. Lack of Experimental Validation: The paper focuses on theoretical analysis and bounds but does not provide experimental validation or empirical results to support the proposed approach. Including experimental validation would have added practical relevance and strengthened the paper's findings by demonstrating the effectiveness of the approach in real-world scenarios. 2. 
Lack of Clarity and Visual Support: One weakness of this paper is the insufficient clarity in explaining the problem it addresses. The excessive repetition of formulas and the absence of visual aids, such as graphs or charts, make it difficult for readers to understand the main ideas and practical implications of the research. The lengthy theoretical derivations, while comprehensive, may hinder the overall comprehension of the paper's intended message. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is it possible to provide any experimental study? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing constructive feedback. **Clarity and visual support**: We will take your comment into account for the camera-ready version. We already have a figure illustrating the computation of the GJ algorithm (Figure 1) and we are happy to use the additional page in the camera ready version to provide additional illustrations and figures explaining our setting. **Experimental Validation**: Our theory gives bounds on the sample complexity of the ERM and we view our primary contribution as theoretical – including tight bounds improving over previous best-known bounds, and analysis for logistic regression where no closed-form solution is known – we leave the extension of the ideas to practical methods in more application-oriented research for future work.
Summary: This paper studies the sample complexity of tuning regularization parameters in linear and logistic regressions under $\ell_1$ and $\ell_2$ constraints in the data-driven setting. Theoretically, it provides a new bound for the pseudo-dimension of the validation loss function class, which significantly improves the best-known results on the problem. Besides, it provides the matching lower bound. Moreover, it also provides a new bound for tuning the regularization parameters of logistic regression, which previous work cannot do. Strengths: First of all, I have to admit that I am not an expert in the area of data-driven algorithm design and may miss some related work. Overall, I think this is a solid theoretical work. - Overall, this paper is well-written and discussions about the comparisons with the related work are clear. - The considered problem (e.g., hyperparameter tuning or model selection) is important for the machine learning community. - The theoretical results seem right although I have not carefully checked the proofs line-by-line. - Theoretically, the obtained upper bound is tight since the (nearly) matched lower bound is provided. - In contrast with previous work, it is good to provide the analysis for the logistic regression as it might provide techniques to analyze more general settings with non-closed solution forms. Weaknesses: - Although this is a pure theory work, it would be better to add experimental results to illustrate its applications in practice. - While the considered setting is the hyperparameter tuning across instances, more discussions can be added to illustrate its practicality over previous work with a single instance. Minor issues: typos: - In lines 503-507 (i.e., Lemma A.1), it is not consistent about the expression of non-increasing (or decreasing) and non-decreasing (or increasing). 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments, and for appreciating the theoretical contributions of our work. **Typos in Lines 503-507**: Thank you for pointing this out. We will fix the typos in the revised manuscript. **Concerning practicality over single-instance work**: In most applications, one needs to solve multiple instances of a regression problem repeatedly, and doing a multi-fold cross-validation on each problem instance can be computationally expensive. Furthermore, collecting all the problem instances to build a single super-instance may not work, as examples across instances may not be i.i.d., and it may be infeasible to compute as well. We will emphasize this difference in the camera-ready version of the paper.
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful comments and suggestions. We are glad that all of the reviewers appreciate our theoretical contribution. The main concerns are the clarity of the settings and more visual supports, which we aim to improve by providing figures describing key concepts of our work. We attached the figures in this rebuttal as a demonstration. Thank you again for your hard work and feel free to let us know if there is any other concern. Pdf: /pdf/8dee7207b432b58dc232e1de144908f2c4e907b5.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Rethinking Gauss-Newton for learning over-parameterized models
Accept (poster)
Summary: This paper studies applying the Gauss-Newton method to train a neural network to solve a regression problem in the NTK and mean-field regimes. The paper establishes a linear convergence rate for Gauss-Newton in this setting. The authors further provide experiments in the teacher-student network setting to study the effect of initialization and step size on convergence and generalization. Strengths: The presentation of this paper is quite good. All the proofs, theorems and experimental results are well-documented. The proofs and experimental results in the appendix are very nicely structured and organized, making it very clear for the reviewer to understand what the authors are doing. Weaknesses: The major weakness of this work is its contribution. First of all, the Gauss-Newton method is a well-known classical method in convex optimization to achieve a faster super-linear convergence rate than first-order methods such as gradient descent, which can only achieve a linear convergence rate. Therefore, it is natural to apply this method to improve the optimization of neural network training. However, the challenging problem in training neural networks is their non-convexity. The authors consider the NTK regime and mean-field regime. It is well-known that the optimization problem in the NTK regime is strongly-convex-like. Therefore, the results the authors establish in the paper are unsurprising, and further, Proposition 1 only establishes a linear convergence rate, whereas Gauss-Newton can achieve a locally super-linear convergence rate in the classical convex setting, with some additional assumptions. Note that if Gauss-Newton is only able to achieve linear convergence, then the advantage of Gauss-Newton over gradient descent is non-existent, since gradient descent can also converge linearly and its cost per step is much smaller than Gauss-Newton's.
Also, the goal of optimization in deep learning is for the model to generalize, and it is not clear to me how the Gauss-Newton method affects the model's generalization. Therefore, currently, I am issuing a rejection for this submission. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Due to the current scale of NeurIPS and the workload on the reviewer, I am unable to go through the appendix in detail. The authors claim their results also hold for the mean-field regime, but the proofs in the appendix seem to only address the NTK regime. I encourage the authors to clarify their results. I think for the mean-field regime the authors may have made some assumptions, such as the initial point being close to the true minimizer, which can circumvent the problem of dealing with non-convexity in the mean-field regime. However, this, again, returns to the classical convex regime. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: The authors discussed the limitations in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We believe the following clarifications should address all your concerns about the strength of our contribution. If you still have concerns, we would be more than happy if you could kindly share them, as this would help us improve this work. 1. **Convergence:** The non-convexity and degeneracy of the loss introduce challenges that do not appear when studying GN for strongly convex objectives. This is apparent in the proofs, which do not rely on classical convexity arguments. Below, we discuss the relevance of the result in terms of linear vs super-linear rates and the mean-field limit. - “Linear vs super-linear rate”: The linear rates we provide are general as they include both GN (when the matrix H is the hessian) and natural gradient (when H is the identity). The super-linear rates are only possible when choosing H to be the hessian and are only local. Moreover, natural gradients are still useful and extensively studied despite their linear rate (see next point). - Improved conditioning: the linear rate for GN still has much better conditioning than gradient descent, which justifies its advantage over gradient descent even if the cost per iteration is higher. Our result highlights precisely this. - Mean-field limit: Indeed, the convergence result assumes the initial point is close to optimality. This is stated in Proposition 2 of the main text: L215-216. By mean-field limit, we mean using a scaling of $1/M$ instead of $1/\sqrt{M}$, where $M$ is the number of particles (L140). Both NTK and mean-field are captured in proposition 2. That being said, we clarify that this does not recover the classical convex regime for two reasons: - The loss is not convex even near the local minimizer: this was shown in Liu et al 2021. Hence, one can’t use convexity arguments in parameter space. - The pre-conditioner in GN can become singular and make the dynamical system diverge (also called finite-time blow-up). 
This never happens in the strongly convex case, where the pre-conditioner has a lower-bounded singular value. This is precisely what makes the analysis more challenging than the strongly convex case. 2. **Generalization of GN:** These are precisely the empirical contributions of the paper: - GN generalizes better with smaller step sizes and exhibits a phenomenon of “hidden learning”: Both train and test losses can remain large when using small step sizes because the linear layer is not well fitted, yet the hidden features keep improving. This ultimately yields better generalization after fitting the linear layer alone (Figures 1 and 2). - We show that using a larger step size for GN does not generalize well despite being faster at optimizing the training objective: This challenges the common belief of using a larger step size for GN. - We show empirically that GN can generalize well, even better than GD. This challenges a common belief that GN does not find solutions that generalize well. --- Rebuttal Comment 1.1: Title: We would be grateful if you engage with us during the discussion period Comment: Dear reviewer, We thank you for your time. Since the authors/reviewers discussion period is still ongoing, we would be grateful if you would give us the opportunity to engage in a constructive discussion about our submission. We are grateful that two reviewers increased their scores based on the constructive discussions we had with them, including c9sb who now recommends an accept with a score of 7. We hope you can give us a chance to clarify any concerns you might have. 
Our contributions are twofold: - A theoretical result on global convergence of GN - An empirical study of the generalisation of GN From your review, we understand you had concerns only about the global convergence result, which we believe we addressed as follows: - **Strength of the result: Linear rate instead of super-linear:** Our result provides a global rate and also covers natural gradients, which can only have a linear rate. Super-linear rates hold only locally. - **Advantage of GN/natural gradient with a linear rate compared to GD:** The advantage comes from an **improved conditioning** compared to GD. Note that natural gradient methods can only come with a linear rate, have a similar cost per iteration as GN, and yet are of interest. Our result covers natural gradients as well. - **Novelty of the result: Mean-field vs NTK regime** Obtaining global convergence results in the mean-field limit is still an active area of research, even for gradient descent ([10,12]), and does not rely on strongly-convex-like arguments as in the NTK regime. This work is the first to provide a convergence result for GN in the mean-field limit. From these clarifications, do you still have other doubts based on which you recommend rejection? Thanks again for your time. --- Rebuttal Comment 1.2: Title: Thank you for your response. Comment: I thank the authors for the clarification. I would like to clarify that in my original review, when I said "strongly-convex-like", I was referring to the case where you can lower bound the smallest eigenvalue of the NTK. *Although the objective is non-convex, the analysis is essentially a convex analysis since you are assuming the initial point is close to the global optimum (in proposition 2), which helps you avoid dealing with the non-convexity by utilizing the positive smallest eigenvalue of the NTK.* With such an assumption, the convergence analyses of NTK and mean-field are essentially the same. 
There are further issues: in Equation 7, you assume the damping parameter is chosen to be some positive constant multiplied by the smallest eigenvalue of the NTK. Although this later helps you prove convergence, why is this assumption practical? Estimating the smallest eigenvalue of the NTK is costly in practice, let alone obtaining the exact value. Therefore, for now, I am keeping my original evaluation. --- Reply to Comment 1.2.1: Title: On the technical contribution Comment: Dear reviewer, We thank you for your comment. Below, we provide clarifications about the damping being optional in our results and about the technical challenge addressed in our proofs that arises from the mean-field regime. We would be grateful if you could let us know if you still have points that require more clarification. 1. **The main technical challenge in the proof is to control the smallest singular value in the mean-field regime.** We agree with you that, once a lower bound on the smallest singular value is assumed to hold, the convergence rate follows easily, as shown in the proof of proposition 1. However, the challenge is to control the smallest singular value, which can get arbitrarily close to 0 in the mean-field regime. We achieved that through three technical propositions stated in the appendix (prop 5, 6 and 7), which are then used to prove the result in proposition 2. We summarize the difference between the NTK and mean-field regimes captured by these results: - **In the NTK regime**, as $M\rightarrow +\infty$, the NTK matrix becomes constant during the optimization dynamics. Therefore, the smallest positive singular value also remains positive and constant. - **In the mean-field regime**, the NTK matrix is not constant during the training dynamics. Even worse, its smallest positive singular value can become arbitrarily close to 0 in general. This happens, for instance, when the variance of the initial weights gets smaller and smaller. 
Our contribution is to analyse the dynamics of the smallest singular value during optimization and find conditions on the initial weights to prevent it from vanishing, thus ensuring convergence. If the reviewer has a reference in mind where such dynamics of the singular values are studied and controlled in the mean-field regime for Gauss-Newton, we are ready to revise our statement of the contributions. 2. **"You are assuming the initial point is close to the global optimal"**: - Our assumption does not require the initial point to be within a ball of any global optimum (there is a whole manifold of global optima, as we detail next). As stated in proposition 2, it consists of randomly selecting the hidden layer and tuning the linear layer only until the functional gradient is below a pre-determined threshold. Note that this condition allows the hidden weights to evolve within an arbitrarily large radius $R$ around the initial ones, provided the linear layer provides a good enough fit (as measured by the norm of the functional gradient). - A condition of a similar nature has been used in Chizat 2019 [13], Theorem 3.3 and Corollary 3.4, to establish global convergence of the Wasserstein gradient flow in the over-parameterized regime of two-layer networks in the mean-field limit. We discussed this in L237-241, but will make it clearer. - This condition is easy to achieve in practice, since optimizing the linear layer only is guaranteed to minimize the functional gradient norm arbitrarily well. 3. **"the analysis is essentially a convex analysis"**: As shown in proposition 2 of [29], the loss landscape of over-parameterized neural networks is non-convex even locally near global optima. But we agree that controlling the smallest positive singular value of the NTK matrix is a key step and in fact the most challenging part of our result. 4. 
**Damping parameter is optional: it can be equal to 0**: - **Theory:** Please note that the parameter $\alpha$ is only assumed to be non-negative (it can be equal to 0, see L180). Our proofs do not rely on such damping at all. We will make this clear in the text. - **Experiments:** We observed little difference whether using damping or not, i.e. setting $\alpha = 0$ or $1$ did not change the overall conclusions. That is because the smallest singular value was very small in practice.
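For concreteness, the (optionally damped) GN step discussed in this thread can be sketched in a few lines of numpy. This is an illustrative sketch only: `residual_fn` and `jac_fn` are hypothetical helpers, not the paper's code, and $\alpha = 0$ recovers the undamped step as in the rebuttal's point.

```python
import numpy as np

def gn_step(w, residual_fn, jac_fn, lr=1e-3, alpha=0.0):
    """One (optionally damped) Gauss-Newton step on a least-squares loss.

    `residual_fn` returns the residual vector r(w) of shape (N,);
    `jac_fn` returns its Jacobian of shape (N, P). With alpha=0 this is
    the undamped GN step; alpha>0 adds Levenberg-Marquardt-style damping.
    """
    r = residual_fn(w)          # residuals, shape (N,)
    J = jac_fn(w)               # Jacobian, shape (N, P)
    # Solve (J J^T + alpha I) u = r in data space (size N, not P),
    # then map back with J^T.
    A = J @ J.T + alpha * np.eye(len(r))
    u = np.linalg.solve(A, r)
    return w - lr * (J.T @ u)
```

On a linear least-squares problem, a single undamped step with `lr=1.0` reaches an exact fit, which is the behaviour the GN preconditioner is designed for.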
Summary: This work theoretically and empirically studied the learning dynamics and generalization properties of one-hidden-layer networks trained with the Gauss-Newton algorithm. The main theoretical result is a loss convergence guarantee, which is provably faster than for the same networks trained with gradient descent. The authors then empirically studied the generalization properties of the GN-trained NN in a student-teacher setting. Strengths: The theoretical treatment of GN in Sec. 4 is well appreciated. While I did not check the correctness of the proof, it appears intuitive that GN methods have better convergence. Weaknesses: My primary concern is the relevance of the work. My impression is that the GN method is virtually never used in NN training, and a brief literature search turned up only a few papers discussing it. This is simply due to the dramatically higher compute demand compared to GD, as the authors mentioned. The size of the Jacobian is N by N, where N is the number of trainable parameters. So training any modern NN with GN can be prohibitively expensive, since it involves pseudo-inverting the Jacobian at every step. While the authors mention other methods such as natural gradient descent, which are in spirit connected to GN, it is unclear how the theoretical/empirical results in this work generalize to those methods, which may be more relevant to this conference. My second concern, which I am less confident of, is that the fact that GN converges faster than GD (at least in terms of big-O analysis) comes as no surprise. That GN converges faster than GD in a nonlinear least-squares problem appears to be well known (https://www.princeton.edu/~aaa/Public/Teaching/ORF363_COS323/F14/ORF363_COS323_F14_Lec9.pdf). I understand that additional technical requirements need to be proven in the one-hidden-layer NN case, but I'm not sure that they are of importance conceptually. 
Finally, the empirical and theoretical parts of the paper are largely unrelated, besides the fact that they both involve one-hidden-layer networks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How can we generalize the results in this work, theoretically or empirically, to the approximate second-order methods (e.g. natural GD) that people use to train NNs? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We believe there is a misunderstanding of the relevance of our contribution. The following clarifications should address all your concerns. If you still have other concerns, we would be more than happy if you could kindly share them, as this would help us improve this work. 1. **Relevance:** We believe there is a big misunderstanding here: - **Natural gradient is covered in our work.** It corresponds to taking the matrix H to be the identity, as discussed in L176. The result of proposition 2 holds for this case as well. The experimental section also corresponds to a natural gradient on a Gaussian model where the network represents the mean while the variance is fixed. In that particular case, GN and natural gradient have the same updates (see ref [31]). Hence, our work shows that GN and natural gradient can find solutions that generalize well. We believe this to be highly relevant to the community interested in second-order methods for deep learning. - **The computational cost:** GN does not require inverting a matrix of the size of the parameters, only of the size of a batch of data. This is thanks to the Woodbury matrix identity, which is commonly used for these methods. We use it on a full batch of data and discuss it in L282-284 (note that in the paper N denotes the sample size, not the number of parameters). Using smaller batches is scalable and was shown to work well in practice (see ref 11). In our setting, we use a full batch as we focus on the ideal algorithm GN without introducing additional biases or stochasticity in the estimation, which could interfere with the interpretation of the results. This approach allows us to understand the ideal performance that a scalable approximation of GN can aim towards. 2. **Novelty/significance of the convergence result:** - The convergence result you mentioned in the link holds for invertible Hessians and is only local. 
This is not applicable to over-parameterized networks, where the hessian is degenerate by construction. This makes the dynamical system prone to diverging in finite time (blow-up) and is more challenging to deal with. Our result precisely handles this possible blow-up (see section A.2 of the appendix) using non-standard techniques that go beyond convex analysis and require dynamical systems analysis instead. - **“I'm not sure that they are of importance conceptually”:** The convergence result of GN might seem intuitive when thinking about the invertible hessian case. However, there is a gap between conjecturing the result from this particular case and rigorously proving it in the more challenging scenario of non-convex over-parameterized networks. Our result relies on non-standard optimization techniques which can still be insightful. In particular, the technical tools used come from dynamical systems analysis (as discussed in L225-230) and are different from the convex analysis often exploited to establish convergence of GN. 3. **Theoretical vs empirical parts:** The motivation for using GN/natural gradient is the faster convergence rate for the training objective. However, these rates are known for convex objectives with non-degenerate hessians. Two questions stand in the way of understanding whether GN/natural gradient is useful for learning over-parameterized neural networks: - When can GN reach a global solution for the training objective? The theory part addresses this first point and focuses only on the training objective. The empirical part confirms the global convergence, since GN systematically obtains 0 training error faster than GD. - Does such a solution generalize well to test data? The empirical part studies the generalization properties of the solution obtained by GN compared to GD. 
It provides two practical insights: - The commonly used prescription for GN is to use the largest possible step size to achieve faster convergence of the training objective and benefit from the faster rate of GN/natural gradient. We show that this results in poor generalization (similar to random features without learning the hidden layer). - Instead, we show that smaller step sizes for GN allow better feature learning and are thus desirable but they come at a higher computational cost. --- Rebuttal Comment 1.1: Title: Thank you Comment: My sincere thanks for the authors' detailed response. I hereby acknowledge that I have read the rebuttal. I appreciate the clarification of the connection to natural gradient descent as well as the computational cost. This does allay a major concern of mine. In terms of the novelty/contribution, I admit that such judgment is always subjective, but I am still not convinced. In the rebuttal the authors allege that the work is novel because this is in the overparameterized regime. But ref 11 that the authors cited, for example, supplies a convergence analysis in the overparameterized limit. Again, I understand that this work covers a technically different scenario than previous work, but I do not find the conceptual picture different enough. Finally, for the theory-empirical gap, I still think that the experiments are not well motivated by the theory (reviewer c9sb raises a similar concern). I think the experiments could be an interesting research project on its own (an extensive empirical study of the best LR to use for GN for generalization). I have raised my score in response to the rebuttal. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you for engaging with us during the discussion period and for remaining open to raising your score as a response to our comments. We are also glad to read that you find our experimental contribution to be an interesting research project on its own. 
We would also like to bring to your attention the updated results on MNIST that we provided in response to reviewer c9sb, as they further support our paper. As we understand it, the remaining concerns are about the **theoretical contribution** and the **connection between theory and experiments**, which we address below. Please let us know if you have additional questions. 1. **Novelty of the theoretical result**: - **"The authors allege that the work is novel because this is in the overparameterized regime:"** We apologize if we gave the impression that the overparameterized regime was the sole reason we claim our work to be novel; we will make that clearer in the text: The novelty comes from the conjunction of two settings: the **overparameterized setting** and the **mean-field limit**. The paper [11] considers the **overparameterized setting in the NTK limit**, since they use the limiting properties of a network with a scaling of $1/\sqrt{M}$ in their analysis. We use a scaling of $1/M$, which yields a different limiting object: the mean-field limit. While this might sound like only a technical difference, it has important conceptual implications, as we detail next. - **Technical difference**. Obtaining global convergence results in the mean-field limit is still an active area of research, even for gradient descent ([10,12]). We believe this work to be the first to provide a convergence result for GN in the mean-field limit. This requires a different analysis than the NTK limit. Indeed, since the NTK limit is essentially a kernel method with hidden weights remaining constant during optimization, the model behaves as a linear model and convergence results follow from a strong-convexity-like argument. This is not the case for the mean-field limit, where the evolving hidden weights result in a highly non-convex objective. We will add a paragraph in the related work section to emphasize this difference. 
- **Conceptual difference:** The mean-field limit is relevant for obtaining good generalisation in neural networks. It is now well documented that the NTK limit is equivalent to a kernel method/linear model, meaning that only the linear layer is effectively trained, while the hidden layers remain close to initialization [1,24]. On the other hand, the mean-field limit allows feature learning, i.e. hidden layers change during optimization to learn a good data representation. This results in better generalisation (see also Figure 5 of the appendix, which illustrates the difference in generalisation). The mean-field limit is more consistent with what is observed in practice when training a neural network that generalises well. It is therefore of interest to consider this particular setting. 2. **Theory-empirical gap**. We apologize if the transition between experiments and theory gives the impression of a gap. We will add an explicit discussion in the experiment section to emphasize the connection between the two parts. - **Reviewer c9sb acknowledged it was no longer a concern and raised their score to accept**. Please see our response to reviewer c9sb and their new reply. - **The experiments are consistent with the theory:** Figure 2 (bottom right) shows that the training objective converges faster when using GN than when using GD, as illustrated by Proposition 2. Moreover, in all experiments, GN optimises the training objective globally, as predicted by the proposition. Note that in practice, the global optimum is reached even without the initial condition on the linear weights of proposition 2. This suggests that the condition could be relaxed. Please also refer to the **Pre-training the last layer to near optimality** paragraph in the answer to reviewer c9sb on this matter. 
- **The experiments provide additional insights not covered by the theory:** The experiments illustrate that obtaining good generalisation requires more than just globally optimizing the training loss. While Proposition 2 provides a global convergence result for the training loss, it says nothing about feature learning and is “happy” with the features staying near initialization as long as the loss reaches a global solution. However, the amount of feature learning during optimization is what essentially drives generalisation performance. Our experiments illustrate this precisely and thus put the result of proposition 2 into perspective.
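The cost argument made earlier in this rebuttal thread (inverting an N x N matrix in data space via the Woodbury identity, rather than a P x P matrix in parameter space) can be checked numerically. This is an illustrative sketch with made-up sizes, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, alpha = 5, 50, 0.1            # N data points, P >> N parameters
J = rng.standard_normal((N, P))     # Jacobian of the model outputs

# Parameter-space form: requires solving against a P x P matrix
# (prohibitive when P is the number of network parameters).
param_space = np.linalg.solve(J.T @ J + alpha * np.eye(P), J.T)

# Data-space form via the push-through/Woodbury identity:
# (J^T J + aI)^{-1} J^T = J^T (J J^T + aI)^{-1}, an N x N solve only.
data_space = J.T @ np.linalg.solve(J @ J.T + alpha * np.eye(N), np.eye(N))

# Both matrices map a residual vector to the same damped GN /
# natural-gradient direction.
```

The two forms agree to numerical precision, which is why the per-step cost scales with the batch size rather than the parameter count.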
Summary: This paper investigates theoretically and empirically the implicit biases of the Gauss-Newton (GN) optimization algorithm on over-parametrized one-hidden-layer models (e.g. the capacity of the model is much higher than that of the ground-truth function to approximate). The main theoretical contribution of the paper pertains to global optimization: a global convergence rate is derived when the dynamics do not blow up in finite time, and sufficient conditions on the initial state of the model are provided to prevent this blow-up. The experimental section explores generalization: it compares GN and Gradient Descent (GD) for various initial hidden-weight variances and learning rates. The key take-away of this last section (and the empirical prescription thereof) is that smaller learning rates are better for GN (while the converse holds for GD) and that GN under-optimizes the linear layer (e.g. the opposite of the lazy regime) while learning good hidden features. More precisely: - Section 3.1 introduces the hypotheses in use on the ground-truth data, the learning objective (Eq. 1) and the model in use, in the finite-width (Eq. 2) and mean-field infinite-width (Eq. 3) cases. Overparametrization (A) is defined in terms of the invertibility of the feature kernel. - Section 3.2 defines the generalized GN vector field (Eq. 5) and the associated discrete and continuous weight dynamics (Eq. 6). Hypotheses on the parameters coming into play in the GN dynamics are introduced (B and Eq. 5) in order to derive the subsequent global convergence results. - Section 4.1 presents Proposition 1, which states that under the aforementioned assumptions, the GN dynamics either blow up or converge to a global minimizer of the loss with an explicit rate which only depends on specific constants characterizing the functions at play, in stark contrast with the SGD convergence rate, which is controlled by the smallest singular value of the Neural Tangent Kernel (NTK). 
- Section 4.2 presents Proposition 2, which states that it is sufficient, for the dynamics not to blow up and to converge at the rate derived previously, that the linear layer is near optimal (e.g. $\\|\nabla_v \mathcal{L}(f(u_0, v_0))\\|$ sufficiently small at initialization). - Section 5.1 presents the teacher-student experimental setup, where the ground-truth data $X$ is Gaussian (with dimension $d=10$) with associated labels $Y$ generated by a shallow one-hidden-layer model (5 hidden units), and the student model is a wide/overparametrized one-hidden-layer model (5000 hidden units), both with a ReLU activation. The three optimization algorithms in use are presented: GN, GD and random features (RF), whereby the hidden weights are random and the linear weights are taken as the minimal-norm least-squares solution (Eq. 9). - Section 5.2 presents the performance metrics, i.e. the weighted cosine distance (WCD) between teacher and student features (which allows for teacher and student features of different dimensions, promotes feature alignment and penalizes feature norm) and the test loss after linear re-fitting (Test-LRFit). Test-LRFit is particularly useful to disentangle the analysis of the whole model's performance *versus* the quality of the learned hidden features, and subsequently identify when GD and GN operate in the kernel/lazy or feature-learning regime. - Section 5.3 presents and analyzes the training experiments, with varying standard deviations of the visible-to-hidden weights (which we denote here $\sigma_u$ for simplicity) and learning steps. Key results are as follows: + Upon increasing values of $\sigma_u$, GD and GN both exhibit a transition between the *feature-learning regime* (performance is better than the RF baseline) and the *kernel regime* (performance matches that of the RF baseline). 
+ In the feature-learning regime (small $\sigma_u$), GN operates best (in terms of the resulting test loss) with a small learning rate ($\approx 10^{-3}$), with performance degrading onwards, while GD operates best as the learning rate increases ($\approx 10^{2}$). GN's best performance is better than GD's best performance. + GN under-optimizes the output linear layer when employing small learning rates while learning good hidden features (as measured by the gap in test loss before and after linear re-fitting). + GN starts by learning the internal features (as measured by the test loss after linear re-fitting until iteration $\approx 10^4$) and the output features onwards (as measured by the test loss). Conversely, the last layer and hidden layer are learned simultaneously with GD (i.e. test loss and test loss after linear re-fitting coincide throughout training). + GN converges faster with larger learning rates, but achieves its best generalization when the learning rate is small. Therefore good generalization with GN comes at the cost of slower convergence. + Finally, across all experiments, the WCD metric is a good proxy for the test loss. Finally, the Appendix provides very detailed mathematical proofs and extra experimental results (e.g. use of the SiLU activation function instead of ReLU, experiments on MNIST). Strengths: - This is a beautifully written paper that is readable for non-super-experts of this literature (as I am). - The maths in the Appendix are extremely rigorous and neat (I carefully checked the proofs). - The conditions under which the derived theoretical guarantees hold are apparently less stringent than in equivalent studies on GD (e.g. any activation function, initial weights not constrained to lie within a radius determined by the smallest singular value of the NTK). - The experimental setup for the teacher-student study is sound and the RF baseline is extremely informative about what is going on. 
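The RF baseline summarized above (random hidden weights, minimal-norm least-squares linear layer, Eq. 9) can be sketched as follows. The function name, default sizes, and scaling conventions are assumptions for illustration, not the paper's code:

```python
import numpy as np

def random_features_fit(X, Y, M=5000, sigma_u=1.0, seed=0):
    """Random-features baseline: freeze random hidden weights U and take
    the minimal-norm least-squares solution for the linear layer.

    Sketch only; the paper's normalization of the output may differ.
    """
    rng = np.random.default_rng(seed)
    U = sigma_u * rng.standard_normal((X.shape[1], M))   # random hidden weights
    Phi = np.maximum(X @ U, 0.0)                         # ReLU features
    # np.linalg.lstsq returns the minimal-norm solution when Phi is wide
    v, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return U, v

# Usage: predictions are np.maximum(X_test @ U, 0.0) @ v
```

In the overparametrized case (M much larger than the number of samples) the minimal-norm solution interpolates the training labels, which is what makes this baseline a useful reference for the kernel/lazy regime.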
Weaknesses: I should state upfront that I am not an expert of this literature, so I may not be aware of the expected theoretical and experimental standards that lead to acceptance. This being said, I'm overall disappointed by the **experiments**: - **Most importantly**: the MNIST experiments (last page of the appendix) seem to contradict the main findings that GN has to operate in the small-learning-rate regime to work best and that GN would fail to optimize the last (linear) layer -- the gap between test loss and test loss after linear re-fitting is even greater for GD than GN! In spite of this glaring observation in stark contrast with the results appearing in the main text, the authors simply do not comment upon this phenomenon at all (L.744-747). - Apart from the choice of the model, which sticks to the mean-field limit introduced in Section 3 (e.g. large number of hidden neurons, normalizing the model output by M), the theoretical and experimental parts are very disconnected. While I do understand that the intention was to cover optimization and generalization in a complementary fashion (the former theoretically, the latter empirically), theory does not much drive the design of the experiments, and the experiments are not crafted to validate the theory. I make some suggestions in the questions below. - For the experiments: while I do understand that the choice of a one-hidden-layer teacher-student setting seems to be a standard in this literature, it lies in between something that could be even more theoretically tractable with controllable analytical quantities (e.g. a toy quadratic optimization problem with known curvature) and something more realistic (e.g. a model with multiple hidden layers). So I'm twice frustrated that the theory has neither been checked in an even simpler setting where everything is analytically well controlled, nor empirically tested against a model that is more complicated, to see if the proposed empirical prescriptions hold beyond this simple setting. 
- I'm skeptical about the stopping criterion used for GN and worried that **it might skew all the experimental results**. - In spite of all the rigor and clarity of the proposed study (which I do appreciate a lot), there is unfortunately no concrete practical prescription for using the GN algorithm. Even more so because the results on MNIST contradict those obtained in the teacher-student setting. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: Important questions/remarks: - My most important remark (highlighted above) is that I suspect your MNIST experiments are not included in the main text because they directly contradict the analysis performed on the regression experiments: GD and GN exhibit the exact same trends, GN works best with large learning rates and optimizes the last layer even better than GD. If you have time by the end of the rebuttal phase, I would be very happy if you could "fix" the experiments, or make sense of the observed discrepancies between the regression problem and MNIST. - Could you please define precisely what exactly you mean by "blow-up"? You directly say in the main text that if w is defined over a finite horizon ($T < \infty$), then it is said to "blow up". Then you say that blow-up happens when the minimal singular value of the NTK is zero. My impression is that there is not enough intuition in the main text as to why this is the case. Looking at the appendix, my own intuition is that when $\sigma^\star(A_w)$ approaches zero, $J_w^\dagger$ diverges (Eq. 15 in the Appendix), and since the GN vector field is "roughly" $\Phi(w) \approx J_w^\dagger \cdot \nabla \mathcal{L}$, the GN vector field itself diverges. Is this intuition correct? Could you please include such intuition in the main text to better define what "blow-up" is precisely? - You say in the main text that "GN provably faster than GD". 
Could you please write down explicitly the global convergence rate of GD to be able to compare it directly with that of GN derived in Proposition 1 (even if it is a well-known result)? However, the comparison does not seem that straightforward when reading L.196-197. I'm just wondering how much writing down GD's global convergence rate would enlighten this part. - I would have been very happy to see an empirical validation of Proposition 2 on a hand-crafted toy model, even simpler than the teacher-student setting, e.g. $\mathcal{L}=\frac{1}{2}(\theta - \theta_\star)^\top \cdot Q \cdot (\theta - \theta_\star)$? On such problems the properties of the spectrum of $\mathcal{L}$ could be well controlled. Taking $\dim(\theta) = 2$ and visualizing trajectories in case of convergence, divergence, and showing the trajectory of GD as well would have been nice to see! - L.223, you write: "The near-optimality condition on v0 can always be guaranteed by optimizing the second variable v while keeping u0 fixed". So why didn't you put it into practice in your own experiments? Especially because you observed that the last layer was not training so well. I'm not saying though that all GN experiments should have been performed with v-pretraining, but extra experiments could have been included. To make myself clearer, here is the gist of some experiment I would have been happy to see: - identify a failure mode of GN in the student-teacher setting -- i.e. the weight dynamics diverge. - show that by pre-training $v$, the problem vanishes (i.e. the weight dynamics converge to an optimum). - demonstrate it on both the teacher-student problem, and on a more complex problem like MNIST. - L. 297: why is the stopping criterion for GN smaller than that of GD ($K^{GD}=10^6$, $K^{GN}=1.5 \times 10^5$)? Is this because you observed blow-up? This choice is glaring in Fig. 
2, bottom left panel and looks very strange: the GN learning trajectory stops at $10^5$ iterations and as the loss was gently decreasing, I would have been happy to see it go even lower! The bottom line is that the stopping criterion for GN looks quite arbitrary and cherry-picked. Why not use an $\epsilon$-relative-change-of-loss stopping criterion, namely: stop the simulation once the parameters have converged up to precision $\epsilon$? - L.336: "this is unlike GD where larger step size yield better performing features". Looking closer at Fig. 1: indeed, the test loss decreases until $\lambda=10^{-3}$, but what happens beyond? Here again, the cut seems arbitrary. And as we expect, when looking at the blue curves (GD) in Fig. 6 in the Appendix, the test loss for GD does increase beyond $\lambda=10^{-3}$. It is not a major issue, however I'm pointing out that the experimental analysis in the main is skewed again by arbitrary choices. Minor questions/remarks: - For non-experts like me: could you please give an intuition for why overparametrization (A) is defined this way? Perhaps this is something obvious for researchers of this community. - You say in the main that your theoretical results hold both in the kernel and feature learning regime. However, I do not understand why -- here I am certainly missing some subtleties. Perhaps it would have been helpful to add background knowledge in the appendix about the exact definition of the feature and kernel limit. Is this about normalizing the model output by $1/\sqrt{M}$ (kernel) or $1/M$ (mean-field, as done in this paper)? - In the same vein, I'm confused about whether the kernel/feature-learning *regime* is the same as (or related to) the kernel/mean-field *limit*. For the paper to be self-contained, it would be useful (at least for me) to add more background knowledge in the Appendix. - L.207: could you explain the intuition? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: The main limitation of the paper is the quality of the presented experiments, the fact that they are not consistent across two different learning problems, and that the results presented in the paper do not lead to a concrete prescription. For the sake of having the proposed work prescribe concrete takeaway prescriptions to use the GN algorithm in practice, I would recommend accept by the end of the rebuttal phase, once: - the results you obtain on MNIST (on a single hidden layer architecture, consistently with the student-teacher setting) are consistent with the results presented in the main. - the choice of the stopping criterion for GN is justified. **POST-REBUTTAL UPDATE**: the authors have thoroughly addressed my concerns, so I increased my score to accept. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
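The reviewer's blow-up intuition (the GN field $\Phi(w) \approx J_w^\dagger \cdot \nabla \mathcal{L}$ diverging as $\sigma^\star(A_w)$ approaches zero) is easy to check numerically, since $\|J^\dagger\|_2 = 1/\sigma_{\min}(J)$. A minimal sketch, assuming a random toy Jacobian with a hand-picked spectrum (the sizes and construction are illustrative, not from the paper):

```python
import numpy as np

# Numerical check of the blow-up intuition: the GN vector field is roughly
# Phi(w) ~ J_w^dagger . grad L, and ||J^dagger||_2 = 1 / sigma_min(J), so it
# diverges as the smallest NTK singular value approaches zero.
# The Jacobian below is a hypothetical toy with a controlled spectrum.
rng = np.random.default_rng(0)
n, m = 5, 20  # n residuals, m parameters (overparameterized: m > n)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((m, n)))

for sigma_min in [1.0, 1e-2, 1e-4]:
    s = np.array([1.0, 1.0, 1.0, 1.0, sigma_min])  # shrink one singular value
    J = U @ np.diag(s) @ V.T                       # J has singular values s
    pinv_norm = np.linalg.norm(np.linalg.pinv(J), 2)
    print(f"sigma_min = {sigma_min:.0e}  ->  ||J^+||_2 = {pinv_norm:.1e}")
```

The printed pseudoinverse norm grows exactly like $1/\sigma_{\min}$, which is the mechanism behind the divergence of the GN field.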
Rebuttal 1: Rebuttal: Thank you for your extremely precise and outstanding review! We are very pleased to read that you found the paper well-written and that you took the time to carefully check the proofs. We hope our answer clarifies all the remaining points. I. **Main points:** 1. **MNIST experiments:** We agree Figure 8 exhibits a different behavior than the experiment in the main. This is attributed to the training objective being under-optimized for small step sizes, which thus 'mechanically' results in lower performance compared to larger step sizes. This would illustrate the trade-off between good generalisation and the computational cost of smaller step sizes. Due to time constraints, we are unfortunately unable to provide further experiments on MNIST, but we agree that it would be valuable to strengthen those results. 2. **Connection between theory and experiments:** - While Proposition 2 provides a global convergence result for the training loss, it says nothing about feature learning and is “happy” with the features staying near initialization as long as the loss reaches a global solution. The experiments illustrate a limitation of Proposition 2 by showing that it is not enough to globally optimize the training loss to get good generalization; instead, the amount of feature learning during optimization is what essentially drives generalization performance. - Pre-training the last layer to near optimality: Please refer to the section devoted to this point. 3. **Motivation for the 1 hidden-layer model:** This is one of the simplest analytically intractable models for which the dynamics of GN can diverge and can reach non-global solutions. We will clarify this in the text and in particular discuss simpler and more complex models as follows: - Linear models with a strongly convex objective (like a quadratic problem) always converge to a global solution. 
The dynamics are indeed much simpler to analyze since the NTK matrix A_w remains constant by construction. In that case, there is no need for the initial condition of Proposition 2 and a similar convergence rate can be obtained. Moreover, on a quadratic problem GN converges to the exact solution in a single iteration. As this behavior is well documented, we decided not to include this case. - More complex networks: We do not make claims on deeper/more complex networks as this would require a different analysis that goes beyond the scope of this paper. We believe this work is a first step towards studying GN for more complex models. 4. **Stopping criterion for GN:** Apologies for the confusion. We had a maximal time budget of 24h per job (we ran 720 jobs in total) to control the overall resources allocated to this project. We are currently running GN with a longer budget. We expect the loss to decrease smoothly and will update the results as soon as we get them. - In all cases, the time limit ensured the training loss (relative to its initial value) was below 10^{-5} and, in most cases, it was below 10^{-7} (see Figure 2, right). - Since the cost per iteration of GN is large, the time budget was exhausted at an error of 10^{-5} when using small step sizes for GN. 5. **Practical prescription:** - A commonly used prescription for GN is to use the largest possible step size as it achieves faster convergence of the training objective. We show that this results in poor generalization (similar to RF). - Instead, we show that smaller step sizes for GN allow better feature learning and are thus desirable, but they come at a higher computational cost. - Ultimately, there must be a trade-off in the step size to achieve the best generalization at the lowest computational cost. An interesting research direction would be to quantify this optimal choice theoretically, but this is beyond the scope of this work. II. **Major questions:** 1. 
**Definition of “Blow-up”:** Apologies for the confusion; your intuition is correct and we will clarify it in the text. 2. **Comparing convergence rates of GN and GD:** Thank you, we will provide the rates of GD for more clarity. - For GD: $e^{-(\mu \sigma_{\min}/4)\, t}$, where $\sigma_{\min}$ is the smallest eigenvalue of the NTK matrix $A_w$. - For GN: for instance, when choosing $H$ to be the identity (natural gradient), so that $L_H = \mu_H = 1$, and choosing $\alpha = 1$, the rate is $e^{-\mu t}$. The eigenvalue $\sigma_{\min}$ can be arbitrarily small, thus drastically slowing down the convergence rate of GD. This is not the case for GN. 3. **Pre-training the last layer to near optimality:** We will clarify that the RF baseline we provide is the extreme case where we train the last layer to exact optimality (instead of near optimality). In that case, the gradient vanishes and it is not possible to move away from the RF solution (which is a global one), so that feature learning is not possible. The intermediate case where the last layer is only near optimal can allow changing the inner weights to some extent, but this gets worse as the last layer gets closer to optimality. - The failure mode of GN (diverging dynamics): We have not observed the divergence of the dynamics empirically. 4. **Larger step-sizes:** We noticed that larger values do not allow GD to converge to a minimizer (as you noted in Fig. 6, the training loss remains large). Please note that this non-convergence for large step sizes is expected in optimization theory: there is a maximal step size beyond which GD with a constant step size cannot converge; this value is usually found by trial and error, which is what we did as best as our compute power allowed. We will include clarifications for: - Overparameterized model: we mean a model that can achieve zero training error on some fixed training set. 
The condition (A) ensures there exists a parameter value $(v_0,u_0)$ achieving 0 training error (which also explains the sentence in L207). We will include a simple proof to show that. - NTK/mean-field regimes/limits. --- Rebuttal Comment 1.1: Title: Post-rebuttal update Comment: Hi! Thank you for your kind answer. As per NeurIPS policy, I hereby acknowledge I thoroughly read your rebuttal. **I. Main points**: **1. MNIST experiments:** in spite of your answer, this seems to contradict your main claim in the paper that "GN achieves an improved generalization when using smaller learning rates". My straightforward suggestion would be to re-run these experiments **using more epochs** to balance the reduction of the learning rate and see if the expected behavior (that described in the main) is recovered. **2. Connection between theory and experiments:** OK **3. Motivation for the 1 hidden-layer model:** OK thank you for making this clearer, I realize my suggestion of the quadratic optimization toy problem was naive. **4. Stopping criterion for GN:** OK, thank you for this clarification. I'm sorry to read you had these limitations on the computational resources, especially for the MNIST experiments, this is unfortunate. Looking forward to having your results, and hopefully by the end of the rebuttal phase! **5. Practical prescription:** thank you for clarifying this. It would be great to make an addition along these lines, in the same very simple vein, along with your updated experimental results. **II. Major questions:** **1. Definition of “Blow-up”:** OK. **2. Comparing convergence rates of GN and GD:** OK. **3. Pre-training the last layer to near optimality**: I do understand that the RF baseline brings the last layer to optimality. Is this also how you ran your MNIST experiments? If this is the case, my apologies for missing this. 
As to the divergence behavior: it would be an invaluable addition to the paper (if possible) to empirically investigate when divergence happens on tractable problems and see if it can be theoretically accounted for (e.g. analyzing the spectrum of the NTK as you mention in the main). **4. Larger step-sizes**: OK thanks for the clarification. My most important takeaway: **I am really ready to increase the score to firm accept if you re-run your experiments with a larger number of epochs and see if the expected behavior (that presented in the main) is recovered**. I do take into account the constraints you have on the compute cluster at your disposal, but could you really not run these experiments at all on a laptop running overnight? I'm thinking that running MNIST training experiments should be tractable with sufficient RAM on a laptop. --- Reply to Comment 1.1.1: Title: MNIST experiments Comment: Dear reviewer, Thank you for taking the time to engage with the discussion and for encouraging us to improve the experiments. We have been re-running the MNIST experiments with a few modifications and we are pleased to announce that the new results are consistent with the rest of the paper. We will discuss the remaining points in a separate answer. Below, we provide a table of the results we obtained as well as a description of the experimental setup and comments on the results. Please let us know if you have any questions about these results. 1. **Comments:** From the result table, we observe that: - Test loss after linear-refit steadily increases with the lr in the case of GN while it tends to decrease in the case of GD. - Learning in the hidden layer is strong in the case of GN: for large values of test and train losses, the test loss after linear refitting is as low as 52.4 for the smallest step-size. This is much less pronounced in the case of GD. - Note that for the largest step-sizes (1e2, 1e3) GD diverges. 2. 
**Setup:** Compared to the MNIST experiment of the submission: - We train for 100000 iterations and set a stopping criterion when the training loss is below 1e-6. - **We used a smaller initial std (0.001 instead of 1):** This allows us to be in a regime where GN and GD have different behaviour. Indeed, from Figure 1 (left) of the main, one can see that generalisation of GD and GN was similar for std = 1, while there was a clear benefit for GN for smaller std. We recover this behaviour in the MNIST experiment. - **Squared loss on the logits instead of the KL:** This choice allows us to get as close to the main experiment as possible. Besides, the KL is not strongly convex while the L2 loss is, which is more consistent with the setup of the paper. 3. **Results:**

| step-size | 1e-3 | 1e-2 | 1e-1 | 1 | 1e1 | 1e2 | 1e3 |
|------------------------------|--------|--------|------|------|------|------|------|
| GN: Train loss | 330. | 96. | 15.5 | 1e-6 | 1e-6 | 1e-6 | 1e-6 |
| GN: Test loss (linear-refit) | 52.4 | 53.2 | 54.4 | 56.0 | 56.0 | 55.1 | 60.9 |
| GN: Test loss | 337.14 | 134.11 | 66.9 | 56.5 | 56.4 | 55.6 | 60.9 |
| GD: Train loss | 200.6 | 61.5 | 1e-6 | 1e-6 | 1e-6 | div | div |
| GD: Test loss (linear-refit) | 165.4 | 88.2 | 64.6 | 64.3 | 59.1 | div | div |
| GD: Test loss | 208.9 | 88.3 | 64.8 | 64.4 | 59.2 | div | div |
Summary: The authors study the Gauss-Newton optimization algorithm in the overparameterized setting. They derive conditions for convergence of the Gauss-Newton algorithm with parameter-dependent damping in terms of the convexity of the loss, the smoothness of the loss Hessian, and the damping constant. They then carry out an empirical study with a student-teacher, one-hidden-layer setup (overparameterized student), and show that Gauss-Newton in this setting requires small step size and small initialization to be successful. They also show that GN requires a larger number of steps but seems to induce more feature learning. Strengths: The idea of studying GN in the feature learning limit and trying to understand its implicit biases is an interesting one. The convergence analysis is new to my knowledge, and the experiments focus nicely on key issues of learning dynamics from second-order methods. Weaknesses: The algorithm as presented requires a parameter-dependent damping constant, which requires computation of the smallest non-zero NTK eigenvalues. These values are very difficult to compute in practice, and the algorithm is generally impractical. The theory consists of a convergence result for the aforementioned algorithm, and it’s not clear how relevant it is in practice. Part of the issue with second-order methods like GN is their impracticality in large-model, large-data settings. The experiments are not on any realistic datasets, and don’t suggest any promising ways forward for development of GN methods. There may be connections to other mean-field regimes which are not discussed in the text. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What is the connection of the mean field analysis to the regimes from these two papers? 
https://proceedings.neurips.cc/paper_files/paper/2022/hash/d027a5c93d484a4312cc486d399c62c1-Abstract-Conference.html https://proceedings.mlr.press/v139/yang21c Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 1 poor Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We are glad you find our study of the generalization and implicit bias of GN to be interesting. We believe the following clarifications should address all your concerns. If you still have concerns, we would be more than happy if you could kindly share them, as this would help us improve this work. 1. **The damping is optional:** Theoretical results in Proposition 2 cover the case without damping ($\alpha = 0$). This is stated in L180 where we say "$\alpha$ is non-negative". However, for more clarity, we will discuss the particular case when $\alpha = 0$. The proof does not rely on $\alpha > 0$. In our experiments, the smallest singular value comes for free after performing an SVD on a matrix of the size of a batch of data. 2. **Practicality and computational cost:** GN requires solving a system of size N (N is the size of a mini-batch). More importantly, this does not require computing a matrix of size M (number of parameters). This is thanks to the Woodbury matrix identity, as we discuss in L282-284. In our setting, we use a full batch, but using smaller batches is also possible and works well in practice; see ref. [11]. In this work, we focus on the ideal algorithm GN without introducing additional biases or stochasticity in the estimation which can interfere with the interpretation of the results. This approach allows us to understand the ideal performance that a scalable approximation of GN can aim towards. 3. **Connection with mean field analysis:** Thank you for the reference, we will include it. As pointed out in their work, the mean-field regime they discuss is the same as the one studied in Chizat 2018, all of which we discussed in the introduction and L140-149. Note that all these works consider the gradient flow dynamics while, here, we consider the dynamics of Gauss-Newton in the mean-field limit. Hence, the analysis is not directly applicable to GN, although the tools considered might be useful for future work. 
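The Woodbury argument in point 2 can be made concrete: one common damped-GN direction $(J^\top J + \alpha I_M)^{-1} J^\top r$ equals $J^\top (JJ^\top + \alpha I_N)^{-1} r$, so only an $N \times N$ system (batch size) needs to be solved, never an $M \times M$ one (parameter count). A small numerical check with hypothetical sizes:

```python
import numpy as np

# Numerical check of the Woodbury point: the damped Gauss-Newton direction
#   dw = (J^T J + a I_M)^{-1} J^T r
# equals J^T (J J^T + a I_N)^{-1} r, so only an N x N system (batch size)
# has to be solved. Sizes are hypothetical; the M x M solve below is done
# for verification only.
rng = np.random.default_rng(0)
N, M, alpha = 50, 1000, 1e-3
J = rng.standard_normal((N, M))  # Jacobian of residuals w.r.t. parameters
r = rng.standard_normal(N)       # residuals f(w) - y

dw_small = J.T @ np.linalg.solve(J @ J.T + alpha * np.eye(N), r)  # N x N solve
dw_big = np.linalg.solve(J.T @ J + alpha * np.eye(M), J.T @ r)    # M x M solve
print("max abs difference:", np.max(np.abs(dw_small - dw_big)))
```

The two directions agree to numerical precision, while the left one only ever forms and factors a matrix of the batch size.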
--- Rebuttal 2: Title: We would be grateful if you engage with us during the discussion period Comment: Dear reviewer, We thank you for your time. Since the authors/reviewers discussion period is still ongoing, we would be grateful if you give us the opportunity to engage in a constructive discussion about our submission. We believe we responded to the main three concerns you had about the paper: - The damping is optional: the theory result holds even without damping. - The computational cost can be easily reduced: thanks to the Woodbury matrix identity and the use of mini-batches. - The mean-field limit is clarified: The paper you mentioned recovers the same mean-field limit we consider and which is equivalent to the one considered in [14,50]. Based on these clarifications, do you still recommend rejection? Do you have any other doubts? We are grateful that two reviewers increased their scores based on the constructive discussions we had with them, including c9sb who now recommends an accept with a score of 7. We hope you can give us a chance to improve this work and clarify any concerns you might have. Thanks again for your time. --- Rebuttal Comment 2.1: Title: Response to author comments Comment: I thank the authors for their comments. Regarding the damping: while I understand that proposition 2 does not require damping, it does require being near the optimal solution. I am more concerned with convergence rates as in proposition 1. I also don't quite understand how minibatch SVD gives good estimates of the smallest eigenvalues of the full NTK - large eigenvalues can sometimes be stochastically estimated in that way, small eigenvalues cannot. The issue I had with the computational cost was the fact that one needs to solve a dataset x dataset linear system. In the case presented here, with the toy dataset, the dataset size is only 500, so it is not too bad; on larger datasets it would be impractical. 
Thank you for pointing out that this method can in fact be minibatched for larger datasets. How does the effectiveness of the algorithm depend on batch size? For larger datasets, practical batch sizes can be small compared to the dataset size, and minibatches provide poor/biased estimators of the true Hessian/NTK. Does this affect the usefulness of GN? I still have concerns over this point, and feel that more extensive experiments are required. Thank you for your response about the reference as well. --- Reply to Comment 2.1.1: Title: Clarifications about the scope of the paper and scalability Comment: Thank you for your response and for further engaging with us in a discussion. Below, we clarify the remaining concerns, starting by clarifying the scope of our work, which is not about scalable approximations of GN, and then address each point in detail. 1. **Scope of the paper:** - **This paper does not aim to address the scalability of GN**; many prior works provided scalable approximations with statistical guarantees for related methods (see for instance [4], which can fall into our proposed framework for GN). - **This work rather asks the question:** Does GN have good convergence and generalisation properties? The answer to this question is important because, if positive, it justifies research directions towards accurate and scalable approximations of GN. - Our work suggests that GN indeed has good convergence and generalisation properties, which encourages further research on scalable approximations that retain these properties. It also highlights the trade-off between optimization speed and generalisation that these approximations should maintain: small step sizes, rather than larger ones, favor feature learning over fitting the linear weights on random features. 2. **Effect of small mini-batch on the usefulness of GN:** This is undeniably an interesting question that is beyond the scope of this work (as discussed above). 
There have been prior works studying the quality of the approximation on a mini-batch, for example for the Wasserstein natural gradient [4] (which is a particular case of the generalised GN we consider (different choice for the matrix H)). There, convergence rates for the approximation are provided and experiments on image datasets show positive results. However, since our work is not about scalable approximations to GN, we will make sure we do not make any claim about the matter in the paper. 3. **Estimating eigenvalues of the full NTK from mini-batches:** - In our experiments, we computed the eigenvalues on the full batch of data. We never claimed that one can accurately estimate those from the SVD of a mini-batch NTK. We will make sure this is clear in the paper. - That being said, there exist algorithms with statistical guarantees for estimating the smallest eigenvalues using mini-batches of data, and these date back to the seminal work of Oja and Karhunen, 1985, “On Stochastic Approximation of the Eigenvectors and Eigenvalues of the Expectation of a Random Matrix”. - Finally, as discussed in the previous points, scalable approximations using mini-batches are beyond the scope of this work. 4. **“Regarding the damping: while I understand that proposition 2 does not require damping, it does require being near the optimal solution”:** - We are unable to see the connection between the damping and the initial condition being near the optimal solution; if you could please clarify what connection you have in mind, we can attempt to answer adequately. - We also apologize if the statement gives the impression that the requirement on the initial condition is a strong one. We have a discussion after Proposition 2 in L221-224 explaining that such a requirement is in fact very easy to enforce by randomly initializing the hidden weights and optimizing the linear layer alone. We will make this clearer in the text. 5. 
**“Concern about the convergence rate of proposition 1”:** Could you please clarify what your concern is about this proposition, as it is not clear to us from your message? The rate obtained is a linear rate that has an improved conditioning compared to gradient descent and thus illustrates the advantage of using GN for faster optimization of the objective. We thank you again for your time and we will make sure to answer any further questions you might have. Best, The authors.
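The quadratic-problem remark from earlier in this exchange (GN converging in a single iteration, GD slowed by conditioning) can be illustrated on the reviewer's suggested 2-D toy. A sketch with a hypothetical ill-conditioned least-squares objective (the matrix and starting point are made up for illustration):

```python
import numpy as np

# Toy illustration: for residuals r(theta) = A theta - b, undamped
# Gauss-Newton reaches the exact minimizer in one step, while gradient
# descent crawls at a rate set by the conditioning of A^T A.
A = np.diag([10.0, 0.1])          # ill-conditioned (constant) Jacobian
b = np.array([1.0, 1.0])
theta_star = np.linalg.solve(A, b)
theta0 = np.array([5.0, -5.0])

# One Gauss-Newton step: theta - (A^T A)^{-1} A^T (A theta - b).
gn = theta0 - np.linalg.solve(A.T @ A, A.T @ (A @ theta0 - b))

# Gradient descent with step size 1/L (L = largest eigenvalue of A^T A).
theta = theta0.copy()
lr = 1.0 / np.linalg.norm(A.T @ A, 2)
for _ in range(1000):
    theta -= lr * A.T @ (A @ theta - b)

print("GN error after 1 step:    ", np.linalg.norm(gn - theta_star))
print("GD error after 1000 steps:", np.linalg.norm(theta - theta_star))
```

Here GN lands on the minimizer immediately (up to floating-point error), while GD is still far away after 1000 steps because the smallest eigenvalue of $A^\top A$ is tiny, matching the rate comparison given above.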
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models
Accept (poster)
Summary: In this paper, the authors propose to improve a pretrained text-to-image diffusion model with human feedback in an online reinforcement-learning manner. The problem formulation and the differences between the online update and supervised fine-tuning are clearly stated. Experiments show that the proposed method effectively improves the text-to-image generation ability. Strengths: 1. The paper is well-written and easy to understand. 2. The comparison and the potential advantages compared to related works are clearly discussed. 3. Experiments show that the proposed method improves the performance of the pretrained text-to-image model, and also outperforms the supervised human-feedback fine-tuning baseline. Weaknesses: 1. Some claims and the results are connected weakly. The claims in Sec. 4.3 should be better aligned with the experimental results. However, the current version makes it hard for me to find a direct mapping. 2. The title of this paper seems not to reflect the main contribution. In my opinion, the main contribution of this work lies in the online manner, since using reinforcement learning to fine-tune the text-to-image model has been explored before. It needs a better title. 3. There are some experimental problems. In my experience, the over-saturation problem in the SFT model can be alleviated by using a small classifier-free guidance scale. Therefore, the necessity of using online optimization is questionable. Also, why do the authors not directly compare with [17]? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discussed some limitations in the last part of the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail, one by one below. **Comment 1. The connection between Section 4.3 and experiments:** It is hard to make a one-to-one mapping between each claim in Section 4.3 and the experimental results, since Section 4.3 provides the overall combination of these differences as a comprehensive explanation of the benefits of using RL over SFT. Since RL and SFT are two very different methods with several differences, it would be hard to select just one difference and conduct a clean ablation study, due to the existence of the other differences. Also, we do not claim that the superiority of RL over SFT is due to a single factor (or difference between them). **Comment 2. The necessity of online optimization:** First, **we propose online fine-tuning to better optimize ground-truth rewards, not just to address the image-quality degradation arising in fine-tuning**. Second, the over-saturation problem is just one aspect of degraded image quality. There are other aspects that cannot be solved by adjusting the guidance weight (also note that merely reducing the guidance weight could lower text-image alignment). In fact, fine-tuning with a limited distribution of images can itself degrade image quality, which is why tricks like a reconstruction loss have been proposed to address such issues. Our KL regularization plays a similar role. Finally, adjusting the classifier guidance scale did not make much difference for the SFT model in our experiments. **Comment 3. Comparison with [17]:** **We did a comparison with [17] in our paper**, which corresponds to the SFT method without KL regularization. We added some theoretical justification and also tried some extra KL regularization methods on top of it. 
--- Rebuttal Comment 1.1: Comment: Thank you again for the review and we hope that our individual response (https://openreview.net/forum?id=8OTPepXzeh&noteId=7cJZSAKzdA) and overall rebuttal (https://openreview.net/forum?id=8OTPepXzeh&noteId=UFj7ZsHm99) have addressed your main concerns. We would greatly appreciate your feedback and please feel free to let us know if you have any other questions in the discussion period!
Summary: The paper introduces the idea of fine-tuning text-to-image diffusion models using reinforcement learning. The core idea is relatively simple: generate samples from a trained diffusion model, use the samples and a reward function to update the model's parameters, and iterate. The paper further introduces a KL divergence loss to prevent diverging too far from the original model's weights. Evaluations are done on different prompts with respect to different capabilities/performance metrics, such as colour or composition. Strengths: - an interesting and novel idea - simple method and easy algorithm - qualitatively pleasing results Weaknesses: - considering only 4 prompts for experimental evaluation is not sufficient - considering only a single prompt for bias evaluation is not sufficient - the authors mention that "longer training time, hyper-parameter tuning and engineering efforts are required as the number of training text prompts increases". This indicates that their method does not scale to a larger set of training prompts, which calls its relevance into question - investigation of generalisation to out-of-distribution prompts should be conducted Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: limitations are briefly discussed; especially the limitations mentioned in weaknesses could be discussed in more detail Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
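The loop this review describes (sample from the model, score with a reward, update, with a KL term anchoring the model to the pretrained weights) can be sketched on a toy 1-D Gaussian "generator" standing in for the diffusion model. Everything below (the reward, step size, and KL weight) is hypothetical and only illustrates the structure of a reward-plus-KL objective, not the paper's actual algorithm:

```python
import numpy as np

# Schematic of reward fine-tuning with a KL anchor, on a toy 1-D Gaussian
# "generator" standing in for the diffusion model. All quantities here are
# hypothetical stand-ins, not the authors' DPOK implementation.
rng = np.random.default_rng(0)
mu0, sigma = 0.0, 1.0               # pretrained model: N(mu0, sigma^2)
mu, lr, beta = mu0, 0.05, 2.0       # beta weights the KL penalty
reward = lambda x: -(x - 2.0) ** 2  # toy reward, peaked at x = 2

for _ in range(500):
    x = mu + sigma * rng.standard_normal(64)       # sample from current model
    # REINFORCE estimate of d/d_mu E[reward]: E[r(x) * (x - mu) / sigma^2].
    grad_r = np.mean(reward(x) * (x - mu)) / sigma**2
    # KL(N(mu, s^2) || N(mu0, s^2)) = (mu - mu0)^2 / (2 s^2); its mu-gradient:
    grad_kl = (mu - mu0) / sigma**2
    mu += lr * (grad_r - beta * grad_kl)           # ascend reward, penalize KL

# Without the KL term mu would drift to the reward peak at 2; with it, the
# stationary point is 4 / (2 + beta) = 1, anchored back toward mu0.
print("fine-tuned mean:", mu)
```

The KL weight plays the role the review attributes to the KL loss: it trades reward maximization against staying close to the pretrained model.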
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail, one by one below. **Comment 1. Lacking experiments, training with more prompts, not scale to a larger set of training prompts** Thanks for your feedback. Please check our new results in the overall author rebuttal and global response pdf. In these documents, we have included experiments that contain training with a large variety of prompts, showcasing that **our method can also improve the average reward on much larger training sets with 104 MS-CoCo prompts and 183 Drawbench prompts (see Table 3 and Fig 1 in the response pdf).** **Comment 2. Generalization of out-of-distribution prompts:** **We conducted tests on unseen prompts, which consist of unseen objects, in Fig. 2 (b).** Notably, the fine-tuned models produced better images than the original model for these unseen text prompts. However, for better generalization, we need to train on a larger and more diverse set of text prompts. **We hope our response resolves all the concerns in your review, and please feel free to let us know if you have any other questions in the discussion period.** --- Rebuttal Comment 1.1: Comment: Thank you for the remarks and the additional experiments run. I have updated my score accordingly.
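The loop the review summarizes (sample from the current model, score the samples with a reward, update the parameters, with a KL penalty against the original weights) can be sketched in a toy 1-D setting. This is an illustrative sketch only, not the paper's implementation: the Gaussian "policy", `reward_fn`, and `kl_weight` are stand-ins for the diffusion model, the learned reward (e.g., ImageReward), and the paper's regularization coefficient.

```python
import numpy as np

def rl_finetune_step(mu, mu_orig, reward_fn, kl_weight=0.1, lr=0.05,
                     n_samples=64, rng=None):
    """One reward-weighted update of a toy 1-D Gaussian 'policy' N(mu, 1).

    Stands in for the loop described in the review: sample from the
    current model, score the samples with a reward, take a policy-gradient
    step, and penalize drift from the original parameters with a KL term.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = mu + rng.standard_normal(n_samples)   # samples from N(mu, 1)
    r = reward_fn(x)
    r = r - r.mean()                          # baseline for variance reduction
    # REINFORCE estimate of d/dmu E[r] under N(mu, 1): mean of r * (x - mu)
    grad = (r * (x - mu)).mean()
    # KL(N(mu, 1) || N(mu_orig, 1)) = (mu - mu_orig)^2 / 2, gradient mu - mu_orig
    grad -= kl_weight * (mu - mu_orig)
    return mu + lr * grad                     # gradient ascent on reward - KL

rng = np.random.default_rng(0)
mu_orig = 0.0                                 # "original model" parameter
reward_fn = lambda x: -(x - 2.0) ** 2         # toy reward: prefer samples near 2
mu = mu_orig
for _ in range(300):
    mu = rl_finetune_step(mu, mu_orig, reward_fn, rng=rng)
# mu moves toward the reward optimum but is held back slightly by the KL term
```

The KL penalty plays the role the review highlights: without it the fixed point is the reward optimum; with it the solution is pulled back toward the original parameters.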
Summary: This paper proposes to use RL to finetune t2i diffusion models and use KL regularization into supervised fine-tuning of diffusion models. The paper shows that RL fine-tuning can avoid the overfitting that arises in SFT, and is generally superior to SFT with respect to both image-text alignment and image quality. Strengths: The motivation for using RL techniques to fine-tune T2I model is reasonable and compelling. The paper is well-written and presented. It is very easy to follow. The idea is simple and easy to implement. The proposed KL regularization technique is effective. The evidence provided in the paper supports the efficacy of this technique Weaknesses: The novelty of this paper is kind of limited. Similar ideas have already been proposed in [1]. I feel the experiment parts are a bit lacking. Only one policy gradient approach is tried in the paper and only 20K images are used for fine-tuning. In the contribution part, the author claimed "online fine-tuning is critical to improving text-to-image alignment while maintaining high image fidelity", but fidelity scores or IS scores are not reported in the paper. Also no human evaluation are conducted in the main paper. The cost for online RL fine-tuning is not reported. I believe it will be more cost-expensive compared with supervised fine-tuning, especially for scenarios such as fine-tuning on multiple prompts. [1] Training diffusion models with reinforcement learning Technical Quality: 3 good Clarity: 3 good Questions for Authors: How do you define the aesthetic scores and the ImageReward scores? Do you use the same metric as in the Imagereward paper? I am also curious about the CLIP/BLIP score with the proposed method which are often used for the evaluation. I think LoRA is proposed to supervised fine-tuning. Do you apply LoRA to both Supervised and RL fine-tuning? Also the UNet in the Stable Diffusion contains different layers including conv layers and cross-attention layers etc. 
Which part of the UNet module do you apply LoRA to? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail, one by one below. **Comment 1. Novelty is limited, similar ideas have already been proposed in [1]** We respectfully disagree with the reviewer’s assessment. In fact, it is unfair to list the similarity to DDPO [1] as a weakness of our work, because 1) our work and DDPO [1] were developed independently, and 2) DDPO is an unpublished work, which appeared online only after the NeurIPS-2023 submission deadline. Moreover, we study the effect of KL-regularization (whose importance in fine-tuning foundation models has been well-established in the literature) in both online and supervised learning approaches, supported by theoretical justifications and experimental results, which has not been explored in the DDPO paper [1]. [1] Training diffusion models with reinforcement learning **Comment 2. Lacking experiments and fidelity score** Thanks for your feedback. Please check the overall author rebuttal and global response pdf. In these documents, we have included experiments that contain training with a large variety of prompts, showcasing that **our method can also improve the average reward on 104 MS-CoCo prompts and 183 Drawbench prompts (see Table 3 and Fig 1 in the response pdf).** Furthermore, we have included **human evaluation results on a single prompt (see Tables 1&2 in the response pdf)**, which show that our RL model outperforms the SFT model in terms of both alignment and image quality. Moreover, our RL model outperforms the original model in terms of alignment, while maintaining comparable image quality. As mentioned in the paper, we use the aesthetic predictor from LAION [2] to measure the image quality (fidelity), since we select some synthetic and challenging prompts for which we do not have ground truth images to compute FID. [2] Schuhmann et al. 
Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022. **Comment 3. The cost for training:** **We reported the cost for RL training in Appendix B**. It is true that RL training requires a relatively smaller learning rate, making it computationally more expensive. However, the increase in computational requirements is not significantly large as we also need a smaller learning rate for supervised fine-tuning to achieve higher image quality. **Comment 4. ImageReward** Yes, as mentioned in our paper, we use the same way to evaluate ImageReward and aesthetic scores as in the ImageReward paper. Also, it has been demonstrated that ImageReward is more correlated with real human decisions than CLIP/BLIP. As a result, we believe that the ImageReward score offers a more meaningful and relevant signal for evaluation purposes. **Comment 5. LoRA fine-tuning** We use LoRA fine-tuning for both RL and SFT. Also, we applied LoRA to the cross-attention layers. --- Rebuttal Comment 1.1: Comment: Thank you again for the review and we hope that our individual response (https://openreview.net/forum?id=8OTPepXzeh&noteId=pZZwGatqVk) and overall rebuttal (https://openreview.net/forum?id=8OTPepXzeh&noteId=UFj7ZsHm99) have addressed your main concerns. We would greatly appreciate your feedback and please feel free to let us know if you have any other questions in the discussion period!
Summary: This paper studies RLHF fine-tuning of diffusion models to learn from and align with human feedback. Specifically, they introduce (1) an online RL strategy and (2) a KL regularization (inspired by a similar regularization for the RLHF via PPO of LLMs). Empirically, they show better alignment with the optimized reward and increased aesthetic quality when compared to the more trivial offline imitation-based RL (named SFT in the paper). Strengths: * The paper is clear, the theoretical section is sound, and the experiments are convincing. * This is arguably the first work applying online RL to fine-tune diffusion models, which is a key contribution. * I appreciated the MDP formulation of the diffusion process. * The inclusion of the KL divergence is interesting, though straightforward given the recent success of RLHF in NLP. Weaknesses: * Section 4.3 proposes 3 theoretical/intuitive reasons why online RL would perform better than SFT. I believe those reasons could be further validated in the experiments. - The first argument is that online learning favors exploration away from the pre-trained distribution. This could be ablated by saving the images generated along the online RL run, and then applying SFT on these images. - The second argument is the difference in KL regularization, which theoretically comes down to the difference between KL and reverse KL. Applying online RL with reverse KL (though more costly) could help ablate the importance of this component. Actually, I am skeptical of the KL-D for SFT, as (1) the more standard KL-0 in App.E actually performs better and (2) offline RL usually requires baseline rewards for normalization, thus adjusting the original reward with a shift factor would not change anything. - The third argument is the robustness of the reward model. Therefore, ablations could better analyze the quality difference of the reward model on the pre-train distribution and then on the updated distribution. 
More generally, the fact that your online RL requires a more robust reward model is actually a drawback/limitation, creating new challenges in robust reward design. * More generally more experiments are required to ensure that online RL for diffusion models consistently helps. - The experiments are made only for a very small number of prompts: 4! This limits applications to real-world applications. - The experiments are made with a single reward model. More experiments with diverse (less robust) reward models would help. - Therefore the empirical contribution may be seen as marginal. - The paper lacks human evaluation, even though this was done for the reward model. I believe human evaluation after online RL vs SFT would have been more interesting. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Could you elaborate on the difference between KL and reverse KL. * Do you think it may be possible to add "online" in the title "Reinforcement Learning for Online Fine-tuning of Text-to-Image Diffusion Models", to specify that your contribution is actually about online RL. * I believe your SFT baselines could be improved with some reward normalization, value function and with KL-D. Such improved SFT baseline would strengthen your experimental sections. * Have you explored refined online RL optimization strategies such as PPO? * Finally, I believe more prompts (for example those used in evaluation) and more reward models (for example those used for aesthetic evaluation) are needed in training. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations could be extended to the societal risks of generative AI, and also mention the fact that online RL requires more robust reward models. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail, one by one below. **Comment 1. Ablation study to verify the effect of exploring online samples** Response 1: This is an interesting question! However, we notice that **online/offline is not the only difference between RL & SFT training**: noticing that weighted ELBO (the RHS of Eq. (9)) is equivalent to a weighted score matching loss in [35], SFT is learning the score function with high rewards from a fixed denoising process, where RL model explores the trajectories with high rewards. As a result, even if we use SFT to train on images generated by the RL model, there are still other differences between such SFT and the RL training. In fact, **RL & SFT are two very different methods with many different aspects, and the first difference discussed in Section 4.3 is only one aspect**, so it could be hard to just select one aspect and conduct a clean ablation study due to other differences. One possible approach is to conduct offline RL by collecting trajectories generated from the original model and comparing it to the online RL, but it would not be a comparison between SFT and RL. We also expect that such an offline method would be outperformed by the online method. This is because the performance of the offline method is upper-bounded by the data coverage, and thus, online exploration would improve the model performance when the offline data coverage is not good. Finally, we want to mention that even if we intuitively treat SFT as an offline method, it is not an offline RL method, which is also discussed in Response 2. **Comment 2. Question about KL-D for SFT:** Response 2: Thank you for the question, but we think that your claim “the shift in the reward would not change anything in SFT” is not accurate. 
We would like to highlight that **there is no policy gradient in SFT**, and **subtracting a baseline without changing the expectation of the gradient estimation only applies to policy gradient methods**. Actually, the solution of the weighted ELBO (the RHS of Eq. (9), equivalent to the score matching loss in [35], which is an L2-loss between the actual score and the score function model) could change when $\gamma$ changes: under the assumption that reward is non-negative, the lowest possible reward is 0. If there are images with 0 reward in the SFT dataset, we will give 0 weight to the score matching loss of such images. If we add a positive shift, the reward will be non-zero and it will give a positive weight on learning the score function of such images. Empirically, as we can observe in Fig 7(a), increasing $\gamma$ will result in a lower reward for SFT training. Moreover, since SFT is not a policy gradient method, it will not benefit from baseline/value function tricks. For more results in KL-D, we present our full ablations result in Section E in the appendix. **Comment 3. The robustness of the reward model:** Response 3: To clarify, the third difference mentioned in Section 4.3 is under the assumption that we have a good reward model. Our point is that if we have a reward model which generalizes well or is even perfect, **SFT will not fully utilize that model by only evaluating the reward on a fixed dataset, whereas RL has the capability to leverage it more comprehensively**. Also, we consider the case when the reward model could be imperfect, which is **why we add KL-regularization to avoid over-optimization and out-of-distribution reward evaluation**. **Comment 4. More experiments with multiple prompts and human evaluations:** Response 4: Thanks for your feedback. Please check the overall author rebuttal and global response pdf. 
In these documents, we have included experiments that contain training with a large variety of prompts, showcasing that **our method can also improve the average reward on 104 MS-CoCo prompts and 183 Drawbench prompts (see Table 3 and Fig 1 in the response pdf).** Furthermore, we have included **human evaluation results on a single prompt (see Table 1&2 in the response pdf)**, which demonstrate that our RL model outperforms the SFT model in terms of both alignment and image quality. Moreover, our RL model outperforms the original model in terms of alignment while maintaining comparable image quality. **Comment 5. Elaborate on the difference between KL & reverse KL:** Response 5: The key difference is which distribution is used to evaluate the difference in the log probability. In KL for SFT, we evaluate using the offline distribution, which is the training data; in KL for RL, we evaluate using online generated trajectories. **Comment 6. Minor issue about the title** Response 6: Thanks for pointing it out! We will address this in the final draft. **Comment 7. Refined online RL optimization like PPO** Response 7: As mentioned in Appendix B, we tried a small learning rate and clipping the gradient norm to stabilize the training, which is similar to PPO. We also tried importance sampling and clipping the ratio, where we did not find much improvement in the learning process. We will clarify this in the final draft. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications and rebuttal. You state that "One possible approach is to conduct offline RL by collecting trajectories generated from the original model and comparing it to the online RL"; that would indeed be necessary to validate that online RL performs better than offline RL. Moreover, I still think that more ablation studies and additional experiments on diverse rewards is necessary. Overall, I read all responses and decided to keep my score. 
--- Reply to Comment 1.1.1: Comment: Thank you for your additional comments! **Q1. Offline RL vs Online RL** Following your suggestion, we fine-tune the model using offline RL: for the prompt “A green colored rabbit”, we employ an advantage-weighted regression as an offline policy learning algorithm [2] with trajectories generated by the original model. Specifically, we use an exponentially weighted advantage objective that learns a policy maximizing the Q-values subject to a distribution constraint (Eq.(7) in [2]) and try different $\beta$ parameters (i.e., $\beta \in \{1, 2, 3, 10\}$). In our experiment, **offline RL only improves the average reward from ~0.1 to ~0.3 (after 5000 gradient updates), while our online RL approach can quickly improve the reward to ~1.5 before 5000 gradient updates.** Because offline RL is usually unstable and its performance is upper-bounded by data coverage, we find that offline RL is worse than online RL. We hope that this additional result can clarify your question. [2] Offline Reinforcement Learning with Implicit Q-Learning. Ilya Kostrikov, Ashvin Nair, Sergey Levine **Q2. Ablation study and rewards** Would you be able to provide more details regarding the specific ablation studies and reward functions that we should consider? We are committed to addressing this concern to the best of our ability. If you have any other questions or suggestions, please do not hesitate to let us know.
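The argument in Response 2 above, that shifting the reward by a constant changes the weighted-ELBO (SFT) solution even though subtracting a baseline leaves a policy-gradient estimate unbiased, can be illustrated with a scalar stand-in for the reward-weighted loss. This is a toy example with made-up numbers, not data or code from the paper.

```python
import numpy as np

def weighted_sft_solution(targets, rewards, shift=0.0):
    """Minimizer of sum_i (r_i + shift) * (theta - t_i)^2.

    A one-parameter stand-in for the reward-weighted score-matching loss:
    theta plays the role of the model, t_i the per-image regression target.
    """
    w = rewards + shift
    return (w * targets).sum() / w.sum()

targets = np.array([0.0, 1.0])   # two "images"
rewards = np.array([0.0, 1.0])   # the first image has zero reward

# With no shift, the zero-reward image gets zero weight and is ignored.
theta0 = weighted_sft_solution(targets, rewards)             # -> 1.0
# A positive shift gives it positive weight, which moves the minimizer.
theta1 = weighted_sft_solution(targets, rewards, shift=1.0)  # -> 2/3
```

For a policy-gradient method, subtracting a constant baseline from the reward leaves the expected gradient unchanged; here, by contrast, the minimizer moves from 1.0 to 2/3, matching the rebuttal's point that SFT with a weighted ELBO is not a policy-gradient method.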
Rebuttal 1: Rebuttal: Overall author rebuttal: We thank all reviewers for their thoughtful comments. We greatly appreciate all the reviewers' acknowledgment that our method is **empirically effective with solid theories**. To address common concerns about scaling up the training prompts and human evaluation, we have added new experiments and evaluations: **1. Training with a large variety of prompts:** We conduct online RL training with 104 MS-CoCo prompts and 183 Drawbench prompts, respectively (the prompts are randomly sampled during training, and the full prompt dataset will be made public). We use the same configuration as in Appendix B but with a longer training time until it generates 10K online samples (50K gradient steps). We report both ImageReward and the aesthetic score of the original and the RL fine-tuned models. For evaluation, we generate 30 images from each prompt and report the average scores of all images. **The evaluation result is reported in Table 3 with sample images in Fig 1 in the author rebuttal pdf, showing that RL training can also significantly improve the ImageReward score while maintaining a high aesthetic score with much larger sets of training prompts**. **2. Human evaluation:** As a supplementary evaluation to Fig. 3, we conduct extra human evaluation. We gather 40 images that are randomly generated from each prompt (“green rabbit”, “cat and dog”, “dog on the moon” and “four wolves”), resulting in a total of 160 images from each model (RL, SFT and the original model). Given two (anonymized) sets of four images from the same random seeds, one from the RL fine-tuned model and one from the original model (or SFT model), we ask human raters to assess which one is better w.r.t. image-text alignment and fidelity (i.e., image quality). Each query is evaluated by 8 independent human raters and we report the average win/lose rate from the RL&SFT comparison and the RL&original-model comparison. 
**The results are presented in Table 1 & 2 in the author rebuttal pdf, showing that the RL model consistently outperforms the SFT model on both alignment and image quality and also outperforms the original model in the sense of alignment with comparable image quality**. We will also include all the results in the final draft. Pdf: /pdf/9e378185be60fa9c77c3bfc39ed077e39e6947bb.pdf
Dataset source: NeurIPS_2023_submissions_huggingface (conference year: 2023)
Summary: The paper proposes to adapt the popular RLHF framework used for fine-tuning LLMs to fine-tuning of diffusion generative models. Given a reward model, the goal is to finetune the parameters of the diffusion model such that the resulting images from the sampling process achieve high reward. Importantly, the reward is a function of the final output of the entire diffusion process (online finetuning) and can thus not be optimized independently across timesteps like the denoising loss used to train the LDM. The authors therefore adapt a previous result to compute the gradient without the need of backpropagating through the entire diffusion graph. Additionally, they regularize the optimisation via an upper bound on the KL divergence between the fine-tuned and the original model. Strengths: - I find the idea to adapt RLHF from language tasks to diffusion generative models to be very relevant as diffusion models clearly often misinterpret details like counts and colors about the prompt. - It is good that the authors added KL regularization to supervised finetuning to establish a better baseline for their RLHF framework. - Section 4.3 clearly distinguishes RLHF from supervised finetuning and gives several good reasons why RLHF is superior to supervised finetuning from a more theoretical perspective. - I find the quality of the results in the paper to be mostly convincing and I get the impression that RLHF significantly outperforms both baselines. Weaknesses: - The paper focuses mostly on finetuning on a single or few prompts at a time. Being able to train on a large variety of prompts to train a model that is overall better than SD instead of only focusing on a few specific aspects would greatly increase the usefulness of the paper. - The evaluation is somewhat limited and it is not clear how cherry-picked the prompts for the results are (for the given prompts the appendix contains non-cherry-picked results). 
It would be much more convincing if the authors could show results on challenging prompts that are randomly generated. Also, the quantitative results in Figure 3 would be more impressive if they were averaged over a larger selection of prompts. - Since the method is trained to improve ImageReward, I would prefer a different evaluation metric in Figure 3 to demonstrate generalization. If there is no automatic pipeline available, the authors could conduct a user study or simply generate prompts that are easy to evaluate by a human. For example, prompts of the form "{N} {CLASS} {BACKGROUND}" , where N is the number of objects and CLASS the object to generate. Then they could simply generate a certain number of images and manually count the ones where the model correctly generates the required number of objects. - In general, the evaluation is in my opinion the weakest part of the paper and I would gladly increase my score if the authors could present more convincing quantitative evaluations or a broader selection of qualitative results (or both). - While RLHF is significantly better than the baselines, to me some of the images, especially the "green rabbit" and "green cat" still look somewhat oversaturated. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is there a reason why eq (8) should not include a min, similar to (4)? - Do you believe that LoRA helps achieve better results or would standard finetuning on the original dimensional weights yield better results if compute would not be a concern? - From my experience, diffusion models have issues with multiple colors in a single prompt. For example a "green rabbit and a yellow bird in front of a blue background". You show that RLHF can improve colors but can it also handle multiple colors. Also diffusion models often fail with text, for example "a tshirt with NeurIPS printed on it", does RLHF help with this or does the reward function not yield useful feedback for this kind of prompt? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The main limitation of the paper is the restriction to a single prompt which is openly discussed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail below. **Comment 1. Fine-tuning on a single or few prompts at a time** Thanks for your suggestion! We added experiments that contain training with a large variety of prompts in the global response pdf. The results show that **our method can improve the average reward on 104 MS-CoCo prompts and 183 Drawbench prompts**. For details, please check the overall author rebuttal and global response pdf (**Table 3 and Fig 1 in the response pdf**). **Comment 2. Human evaluation** Thanks for your suggestion! We conducted **human evaluation as a supplement to Fig 3** in the global response pdf. In our human evaluation, **the RL model consistently outperforms the SFT model in terms of both alignment and image quality and also outperforms the original model in terms of alignment with comparable image quality**. For details, please check the overall author rebuttal and global response pdf (**Table 1&2 in the response pdf**). **Comment 3. LoRA fine-tuning vs. full fine-tuning** In fact, we first tried fine-tuning with all parameters, which also works well, but we finally switched to LoRA fine-tuning for more efficient training. **Comment 4. Multiple colors & text generation** This is an interesting question! We believe the answer to the question “whether RLHF helps with such tasks” depends on the capability of the reward function. In the new experiments with multiple prompts, we find that **RL fine-tuning leads to enhanced reward scores (from 0.76 to 1.57) on text prompts with multiple colors (e.g., yellow vase and red book) and produces better-aligned images compared to the original model (see Fig 3(d) in the response PDF).** Regarding text rendering, although the reward scores show improvement (from 0.45 to 1.11), there is no substantial difference in the generated images. 
However, we expect that with a more sophisticated reward function that is capable of capturing finer details, RLHF can improve text rendering. These results support our claim that RLHF remains effective even in highly challenging scenarios, encompassing multiple colors and text rendering. **Comment 5. Minor issues like the adding min in eq(8)** Thanks for pointing them out! We will fix them in the draft. --- Rebuttal Comment 1.1: Comment: Thank you again for the review and we hope that our individual response (https://openreview.net/forum?id=8OTPepXzeh&noteId=usVNThWza4) and overall rebuttal (https://openreview.net/forum?id=8OTPepXzeh&noteId=UFj7ZsHm99) have addressed your main concerns. We would greatly appreciate your feedback and please feel free to let us know if you have any other questions in the discussion period!
Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration
Accept (poster)
Summary: The paper proposes a strategy called Training-frEE calibratioN (TEEN) for Few-Shot Class-Incremental Learning (FSCIL) scenario to enhance the discriminability of new classes by fusing the new prototypes with weighted base prototypes. TEEN demonstrates remarkable performance and consistent improvements over baseline methods in the few-shot learning scenario. Strengths: The paper addresses the Few-Shot Class-Incremental Learning (FSCIL) scenario, which is a challenging and important problem in real-world scenarios. The proposed strategy, TEEN, is simple yet effective and demonstrates remarkable performance and consistent improvements over baseline methods in the few-shot learning scenarios. Weaknesses: The paper demonstrates significant improvement on new classes. However, based on the experimental results, it appears that the False Negative ratio, which involves classifying base instances into incorrect classes, may increase. It would be beneficial to address how this problem is handled and how the performance can be balanced between base classes and new classes. Additional analysis on this topic would be appreciated. As stated by the authors, pre-training on a dataset that is independent of the subsequent data distribution would be advantageous. The paper lacks a comparison of the proposed method with state-of-the-art methods in other few-shot learning scenarios, such as few-shot domain adaptation or few-shot semi-supervised learning. It would be beneficial to include more analysis and comparisons with these methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness section for more details Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weakness section for more details Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** The paper demonstrates significant improvement on new classes. However, based on the experimental results, it appears that the False Negative ratio, which involves classifying base instances into incorrect classes, may increase. It would be beneficial to address how this problem is handled and how the performance can be balanced between base classes and new classes. Additional analysis on this topic would be appreciated **A1** We thank the reviewer for the constructive question. Firstly, we comprehensively analyze the potential negative effect of TEEN in section 4.2. From the final benchmark results presented in Section 5, it can be observed that **the potential negative effect is overshadowed by the more substantial positive effect of the improvement in the new prototypes**. Furthermore, we include a hyper-parameter $\alpha$ to adjust the strength of the calibration. We give a detailed ablation study of $\alpha$ in Figure 5. The hyper-parameter $\alpha$ temporarily provides a simple solution to ensure performance and more complicated methods to mitigate the increasing False Negative ratio may be designed in the future. --- **Q2** As stated by the authors, pre-training on a dataset that is independent of the subsequent data distribution would be advantageous. The paper lacks a comparison of the proposed method with state-of-the-art methods in other few-shot learning scenarios, such as few-shot domain adaptation or few-shot semi-supervised learning. It would be beneficial to include more analysis and comparisons with these methods. **A2** We thank the reviewer for the suggestion and the potentially related field. To the best of our knowledge, conventional methods in the few-shot scenarios do not consider the process of *incremental learning.* These methods were not designed with the continuous arrival of new data in mind, **making them unsuitable for FSCIL tasks**. 
If there are any papers closely related to the cross-dataset few-shot class-incremental learning scenario, we would appreciate it if you could point them out directly, and we will discuss them in the final version. Additionally, there is currently no strict definition for cross-dataset few-shot class-incremental learning (e.g., the selection of datasets or choice of evaluation metrics). In fact, we consider the cross-dataset setup as a general assumption and limitation in all FSCIL problems. Hence, in the **limitations** section, we mention this as a potential future research direction. However, to better demonstrate the performance of TEEN, we attempted a simple cross-dataset experiment by constructing a **CUB200->miniImageNet** dataset comparison to address your potential concerns. The results are shown in the global response (**Cross-dataset few-shot class-incremental learning scenario**). Notably, TEEN still shows competitive performance. We speculate that existing methods (e.g., CEC and FACT) often employ complex learning algorithms that lead to excessive adaptation to the pre-training dataset. In contrast, TEEN relies on the calibration of semantic similarity, which may alleviate the potential negative impact of dataset shift.
Summary: This paper tackles the problem of few-shot class incremental learning based on prototypical network. Motivated by the observation that novel classes are easily misclassified as base classes, the authors propose a prototype calibration strategy. The calibrated prototype is a weighted sum of prototype computed from novel class support set and base classes semantically similar with the novel class. Experiments on miniImageNet, CUB and CIFAR demonstrate the superiority of the proposed method over previous ones. Strengths: 1. The paper is well motivated that the inferior performance of prototypical network results from the phenomenon that novel classes are easily misclassified as base classes. The phenomenon is well-studied via experiments. 2. The paper is well written and easy to follow. 3. The authors propose a simple but effective calibration strategy to improve the performance. Weaknesses: 1. The paper introduces two hyper parameters which need to be exhaustively searched for every dataset. 2. There lacks cross-dataset experiments to show the effectiveness of the proposed method when transferring knowledge from one dataset to another. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The semantic similarity based calibration is a straightforward and commonly used method used in the classification area. There may lack technical contribution to the community. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitation of cross-dataset evaluation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** The paper introduces two hyper parameters which need to be exhaustively searched for every dataset. **A1** We thank the reviewer for the constructive question. We show the performance trend with respect to $\alpha$ and $\tau$ on each benchmark dataset in **Figure 2 in the Rebuttal PDF**. To better illustrate the trends, we have included the replicated trend (Figure 5(a)) in the rebuttal PDF. We observed that the trend of the hyperparameter $\alpha$ remains consistent across all benchmark datasets. Although the trend of the hyperparameter $\tau$ may vary slightly between the CUB200, CIFAR100, and miniImageNet datasets, maintaining uniform settings still allows TEEN to maintain competitive performance. Notably, the trend of $\tau$ in the miniImageNet and CIFAR100 datasets is identical. We hypothesize that the slight difference observed in CUB200 may be attributed to the fact that it is a *fine-grained* dataset. --- **Q2** There lacks cross-dataset experiments to show the effectiveness of the proposed method when transferring knowledge from one dataset to another. **A2** In fact, we introduce this more realistic FSCIL scenario (e.g., pre-training and fine-tuning on *independent* datasets) in our **limitation** part. We believe that this is a limitation currently present in mainstream FSCIL settings and could be a potential research direction. This limitation can be viewed as an assumed constraint in all existing FSCIL methods, which can be further explored in future research. Nevertheless, we construct a stricter FSCIL scenario to evaluate the effectiveness of TEEN. Please refer to the global response (**Cross-dataset few-shot class-incremental learning scenario**) for detailed results. Specifically, we pre-train the model on the base classes of the CUB200 dataset and incrementally learn the few-shot tasks in the miniImageNet dataset. The comparison results show that TEEN can also perform competitively. 
We speculate that existing methods often overly rely on specialized learning processes on the pre-training dataset. For example, methods like CEC require learning a classifier adaptation module on the base classes. These modules, which are excessively trained on the pre-training dataset, may result in poor performance of existing methods in incremental learning scenarios with dataset shift. In contrast, *TEEN does not introduce additional modules for adaptation*. It simply leverages the potential semantic similarity between classes to calibrate the prototypes. This training-free characteristic allows TEEN to maintain competitive performance in scenarios with dataset shifts. --- **Q3** The semantic similarity based calibration is a straightforward and commonly used method in the classification area. There may lack technical contribution to the community. **A3** Different from existing semantic-based calibration methods, we propose a specific *training-free* calibration method (TEEN) to improve the discriminability of novel classes. Notably, the feature extractor used in TEEN is *only* trained on base classes and *does not involve any extra module or data to characterize the semantic similarity*. Besides, the empirical analysis and observations in our paper are also meaningful in the scenario of FSCIL. In our global response, we reiterate the contributions and novelty of our paper. Please refer to the global response for a detailed conclusion. --- Rebuttal 2: Comment: The rebuttal resolves my concerns and I would like to keep my original score (5: Borderline accept). --- Rebuttal Comment 2.1: Title: Response to 3Zyg Comment: We greatly appreciate your support and welcome further discussion if you have any additional questions or concerns.
Summary: The authors work on the Few-Shot Class-Incremental Learning (FSCIL) scenario and propose the Training-frEE calibratioN (TEEN) strategy. This strategy enhances the discriminability of new classes by fusing the new prototypes with base prototypes. This approach is different from previous methods, which either introduce extra learnable components or rely on a frozen feature extractor. Strengths: 1. The authors observed that the feature extractor, although only trained on base classes, can surprisingly represent the semantic similarity between the base and unseen new classes. Accordingly, they proposed a solution. 2. The paper is easy to follow, with clear experimental details that aid reproducibility. The motivations are also presented clearly. Weaknesses: 1. The issue of misclassification has already been observed and studied in previous few-shot learning works. Therefore, the poor performance of new classes is not surprising in the task of Few-Shot Class-Incremental Learning (FSCIL). 2. Forming connections across samples from base and novel classes is not a new concept. For instance, FADI[1] discovered that a novel class may implicitly leverage the knowledge of multiple base classes to construct its feature space. It then builds a discriminative feature space for each novel class via association and discrimination steps. [1] Few-Shot Object Detection via Association and Discrimination, Yuhang Cao et al., NeurIPS 2021. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I can understand that the prototype of a novel class is biased due to the lack of samples, and it can be easily misclassified as a base class. Intuitively, if the novel class is prone to be classified as the most similar base class, interpolation among such features could exacerbate this issue. 
But why is the use of weighted base prototypes to calibrate novel ones beneficial in enhancing the discriminability of such a prototype while reducing the discriminability of base prototypes (lines 217-218, 238-239)? And could you please clarify the definition of discriminability? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: As mentioned in the weaknesses section, the proposed idea is not novel at all. It unfortunately is just a tiny incremental contribution which is rather insufficient for a publication in top tier conference such as NeurIPS. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** The issue of misclassification has already been observed and studied in previous few-shot learning works. Therefore, the poor performance of new classes is not surprising in the task of Few-Shot Class-Incremental Learning (FSCIL). **A1** We thank the reviewer for the suggestion. **We must emphasize that our paper not only illustrates the phenomenon of low performance on new classes but also investigates the underlying causes behind this phenomenon**. Furthermore, based on our analysis of this phenomenon, we propose our approach (i.e., TEEN). These analyses are unique in the FSCIL field and we believe that these analyses can provide new insights for the FSCIL field. Besides, if there are some papers that closely resemble our experimental analysis and methods, we would appreciate it if you could directly point out those relevant papers. We will include a detailed discussion of these papers in the final version of our paper. --- **Q2** Forming connections across samples from base and novel classes is not a new concept. For instance, FADI[1] discovered that a novel class may implicitly leverage the knowledge of multiple base classes to construct its feature space. It then builds a discriminative feature space for each novel class via association and discrimination steps. [1] Few-Shot Object Detection via Association and Discrimination, Yuhang Cao et al., NeurIPS 2021. **A2** Thank you for providing this helpful citation. We differentiate our method from [1] based on the following perspectives: - **Setting and Motivation**: [1] focuses on the Few-Shot Object Detection (**FSOD**) task and argues that directly fine-tuning the model pre-trained on all base classes with abundant samples will lead to inferior performance on new classes. However, it does not analyze the advantages of connecting new and base classes in the FSOD scenario. 
In other words, the motivation behind connecting base and novel classes in the FSOD scenario is unclear, and there is a lack of experimental insights in [1]. In contrast, our paper focuses on the **FSCIL** task and is the first to explore the reasons for the low performance of new classes in FSCIL. We find that new classes are generally easily misclassified into the most similar base classes, and we also empirically find that feature extractors are still able to characterize the semantic similarity between new and old classes even if they have not been trained on new classes. Based on these unique analyses, we achieve the goal of calibrating the new class prototypes with the old classes by utilizing a feature extractor that has only been trained on the base classes and the semantic similarity it implies. - **Methodology**: [1] adopts a two-stage (i.e., association and discrimination) learning method to learn a more discriminative novel classifier. However, it adopts WordNet [2] as an auxiliary source to describe the semantic similarity between classes and uses the Lin similarity. Besides, [1] aligns the novel classes to the most similar base class. After that, [1] adopts a *Set-Specialized Margin Loss* to explicitly improve the discriminability. In contrast, our method TEEN proposes to leverage the feature extractor trained *only on base classes* to characterize the semantic similarity between the base and novel classes and does not require an extra semantic characterization model, which is simpler. Besides, TEEN directly uses the cosine similarity to calibrate the new prototypes based on the weighted base prototypes and does not involve any training module or stage, which is more efficient. [2] George A Miller. Wordnet: a lexical database for english. --- **Q3** Why is the use of weighted base prototypes to calibrate novel ones beneficial in enhancing the discriminability of such a prototype while reducing the discriminability of base prototypes (lines 217-218, 238-239)? 
And could you please clarify the definition of discriminability? **A3** Thanks for your question. We will address your questions separately in two ways. - **Firstly**, we clarify the definition of discriminability in the FSCIL context. Here, when we talk about *discriminability*, we mainly refer to **prototypes**, which are the class centers of each class. From the results, if the classification accuracy of a class improves, we can infer that the discriminability of that category's prototype has also improved. - **Secondly**, we need to emphasize that in FSCIL, the model is required to recognize both new and base classes *simultaneously*. In our approach, we utilize semantic similarity to calibrate the new class prototypes based on a weighted combination of prototypes from semantically related base classes. Intuitively, this process brings the new class prototypes closer to the base class prototypes to a certain extent. Consequently, some samples from the base classes may be mistakenly classified as the calibrated new classes. We comprehensively analyze the potential negative (in Figure 3 and Section 4.2) and positive effects of our calibration method (Section 5), which shows the effectiveness of TEEN. Please refer to Section 5.3 for the effectiveness of the semantic-based weights. --- **Q4** The proposed idea is not novel at all. **A4** We recapitulate our contributions. First, we empirically find that new class performance was much lower than that of the base classes in previous methods. Then, we explore the causes of this phenomenon. These analyses are first proposed in the FSCIL field, which points out that we should pay more attention to the performance of new classes. Besides, we observe that the feature extractor trained on base classes can also depict the semantic similarity between the base and new classes. 
Based on these analyses, we propose TEEN, which not only achieved a higher average accuracy but also improved the accuracy of new classes (**10.02% ∼ 18.40% better than the runner-up method**). Finally, we also validate TEEN on the Few-Shot Learning (FSL) task, which can also show competitive performance. --- Rebuttal Comment 1.1: Comment: 1. I thought that the reasons behind misclassification from base to novel classes are two-fold: 1) the standard training method for deep neural networks typically involves empirical risk minimization (ERM) [V. Vapnik. Principles of risk minimization for learning theory. In NeurIPS], and 2) misclassifying novel classes has much less risk compared to base classes because the number of samples belonging to base classes is much larger than the number of samples belonging to novel classes. Besides, it is intuitive that the classifier misclassifies the new classes to their corresponding most similar base classes. So, that is why FADI used an association step to build the bridge between base and novel categories based on the most similar semantic information between them. 2. I disagree with the statement that 'FADI does not analyze the advantages of connecting new and base classes in the FSOD scenario. In other words, the motivation behind connecting base and novel classes in the FSOD scenario is unclear'. Actually, this paper provided the benefits of associating base and novel classes via pseudo-dual labels, and it also explained the motivation for this process. 3. Based on your conclusion that new classes are prone to their most similar base classes, why does using the most similar prototype of the base class reduce the performance on novel classes, as shown in Figure 5d? --- Reply to Comment 1.1.1: Title: Further Clarification of Reviewers' Concerns: Providing Deeper Insights Comment: Thank you for your response. 
Before addressing your concerns individually, we would like to re-emphasize that [1] and TEEN are focused on different tasks, namely FSOD and FSCIL, respectively. Furthermore, let us recapitulate the main differences between [1] and TEEN. - First, [1] heavily relies on an external semantic similarity source, such as WordNet, to identify the most similar base class for each novel class. As mentioned in [1], the assigning policy is a crucial component in the association step. Notably, [1] associates **only one** most similar base class with each corresponding new class. In contrast, TEEN substantiates an overlooked characteristic of the feature extractor (i.e., a feature extractor trained only on base classes can also characterize the semantic similarity between the base classes and unseen novel classes) through quantitative and qualitative analysis, thus breaking free from the reliance on additional similarity characterization tools. Furthermore, TEEN utilizes the semantic similarity captured by the feature extractor to weight the prototypes of the base classes. It then calibrates the prototypes of the new classes based on the weighted semantic information from the base classes. - Secondly, in the FSOD task, the method proposed in [1] leverages the reuse of base class samples for balanced fine-tuning, leading to an alignment of the feature distribution. In contrast, TEEN does not utilize any base class samples or employ additional training modules when performing incremental recognition of the novel classes. **A1**: First and foremost, it is essential to emphasize that in the current FSCIL task, fixing the feature extractor is a highly common practice. Furthermore, not all methods involve complex ERM (Empirical Risk Minimization) training during incremental learning. As stated in Section 2.2, these methods typically adopt a frozen feature extractor and utilize the prototypes for new classes to achieve the recognition of novel classes. 
Therefore, the low performance of the novel classes in these methods may not be directly explained by the training process, where misclassifying new classes as old classes could easily degrade ERM. Besides, *we agree that it may not be intuitive for the novel classes to be misclassified into the most similar base classes*. The low performance of the novel classes could also be attributed to inaccurate representations of the individual novel classes, leading to misclassifications among the novel classes themselves. We supported our conclusions through quantitative analysis based on empirical results in Section 3 rather than relying solely on subjective intuition. We believe these experimental analyses are meaningful for understanding tasks related to few-shot scenarios. **A2**: We apologize for the confusion. Our work delves into the advantages of connecting novel and base classes from an empirical standpoint, which is not thoroughly investigated in FADI. In Section 3, we conducted comprehensive experiments to quantitatively explore and draw conclusions based on our observations, which motivated our method. In contrast, the motivation provided in FADI primarily relies on qualitative reasoning. We believe that our experimental analysis is insightful and valuable. **A3**: In Figure 5d, we demonstrate that the utilization of semantic-based similarity (as shown in Eq. 5 and Eq. 6) and directly aligning new classes to their respective most similar $K$ base classes (as described in SimpleTEEN in Section 5.3) does not effectively improve the performance of the new classes compared to TEEN. As previously mentioned in the comment, we conducted a detailed analysis of the misclassifications of new classes. We collected statistics on how novel classes were misclassified into **the top 10 most similar base classes** (i.e., Table 2) rather than just one base class. 
This finding indicates that while the feature extractor trained solely on base classes can capture semantic similarity to some extent, it may not accurately identify the single most similar base class. Therefore, we propose calibrating the novel prototypes by the *weighted* base prototypes with semantic-based similarity. We appreciate your constructive feedback and will provide a more detailed discussion in the final version. We hope that our response has addressed your concerns and alleviated any doubts you may have had. If you have any further questions or require additional clarification, we are more than happy to engage in further discussion.
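The calibration discussed throughout this thread — cosine similarity between the new prototype and every base prototype, a temperature-$\tau$ softmax turning similarities into weights, and a linear interpolation with strength $\alpha$ — can be sketched in a few lines. This is an illustrative reconstruction from the rebuttal's description, not the paper's exact Eq. 5 and Eq. 6; the function name and default values are assumptions.

```python
import numpy as np

def calibrate_prototype(p_new, base_protos, alpha=0.5, tau=16.0):
    """Pull a biased new-class prototype toward a similarity-weighted
    combination of base-class prototypes (a TEEN-style sketch).

    `alpha` and `tau` play the role of the two hyper-parameters
    discussed in the rebuttal; their defaults here are illustrative.
    """
    p_new = np.asarray(p_new, dtype=float)
    base_protos = np.asarray(base_protos, dtype=float)
    # Cosine similarity between the new prototype and each base prototype.
    p_unit = p_new / np.linalg.norm(p_new)
    b_unit = base_protos / np.linalg.norm(base_protos, axis=1, keepdims=True)
    sims = b_unit @ p_unit
    # Softmax with temperature tau: every similar base class contributes,
    # not only the single most similar one (cf. the SimpleTEEN ablation).
    w = np.exp(tau * sims)
    w = w / w.sum()
    # Linear interpolation with strength alpha pulls the biased prototype
    # toward the semantically weighted base prototypes.
    return alpha * p_new + (1.0 - alpha) * (w @ base_protos)
```

Setting `alpha=1.0` disables the calibration entirely and recovers the raw few-shot prototype, which makes the interpolation strength easy to ablate.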
Summary: This paper presents a novel training-free calibration approach for few-shot class incremental learning. The authors make an observation that one main problem with FSCIL is that data of new classes can be easily mis-classified as base session classes. By utilizing the well-calibrated embeddings of base session classes to help embeddings of the new classes, the authors improve the performance of FSCIL. Strengths: 1. The observation of new classes being confused as old classes is somewhat novel. 2. The proposed classifier-calibration method is interesting and novel. 3. The proposed method is simple yet effective. 4. Detailed ablation studies have been performed on the introduced hyper-parameters. Weaknesses: 1. One of the reviewer's concerns is in terms of the robustness of the method to hyper-parameter \alpha and \tau. Are the optimal hyper-parameters the same for different datasets or different incrementing procedures (i.e., how many classes per incremental session)? 2. In addition, the author should provide more detailed justification for the design of the method. Why use this simple linear interpolation for calibration? Are there any empirical observations or theory to support this design? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I suggest the author provide more ablation study on robustness of hyper-parameter on more dataset. 2. I suggest the author provide more justification on the designing of the method (i.e., why this simple linear interpolation form). Such justification can be empirical visualizations, theoretical analysis, etc. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** One of the reviewer's concerns is in terms of the robustness of the method to hyper-parameters $\alpha$ and $\tau$. Are the optimal hyper-parameters the same for different datasets or different incrementing procedures (i.e., how many classes per incremental session)? **A1** We are sorry for the confusion. In fact, we clearly show the value of $\alpha$ and $\tau$ on different benchmarks in line 266. The optimal value of $\tau$ is stable across all benchmark datasets, while the optimal $\alpha$ differs. More detailed ablation studies on $\alpha$ and $\tau$ are shown in Figure 5. Regarding the number of classes per incremental session, we introduce the base experiment setting in lines 254 to 256 and provide detailed settings in Section 4.1 of the supplementary material. Please refer to the corresponding sections mentioned above for the relevant experimental settings. --- **Q2** In addition, the author should provide more detailed justification for the design of the method. Why use this simple linear interpolation for calibration? Are there any empirical observations or theory to support this design? **A2** We are sorry for the confusion. The form of linear interpolation is simple and widely used in different machine learning and deep learning fields, such as mixup [1] and the LVQ algorithm [2]. A simple and intuitive understanding is that linear interpolation *pulls a vector closer towards another vector in terms of direction*. To visualize this, we have used a toy example drawn from a 2D Gaussian distribution for a simple demonstration. Please refer to **Figure 1 in the Rebuttal PDF** for a more detailed introduction. In addition to this intuitive interpretation, the empirical observation in Section 3 also strongly supports our design. Specifically, we empirically find that the reason for the low performance of the new classes is that they are heavily misclassified into the most similar base classes. 
Therefore, we can improve the performance of the new class by pulling the biased new prototypes towards the prototypes of the most similar base classes. [1] Zhang, Hongyi, et al. mixup: Beyond Empirical Risk Minimization. https://arxiv.org/abs/1710.09412 [2] Learning vector quantization. https://en.wikipedia.org/wiki/Learning_vector_quantization --- **Q3** I suggest the author provide more ablation study on robustness of hyper-parameter on more dataset. **A3** We thank the reviewer for the suggestion. Please refer to **Figure 2 in the Rebuttal PDF** for ablation studies of the hyper-parameters (i.e., $\alpha$ and $\tau$) on more datasets. To better illustrate the trends, we have included the replicated trend (Figure 5(a)) in the rebuttal PDF. We observed that the trend of the hyperparameter $\alpha$ remains consistent across all benchmark datasets. Although the trend of the hyperparameter $\tau$ may vary slightly between the CUB200, CIFAR100, and miniImageNet datasets, maintaining uniform settings still allows TEEN to maintain competitive performance. Notably, the trend of $\tau$ in the miniImageNet and CIFAR100 datasets is identical. We hypothesize that the slight difference observed in CUB200 may be attributed to the fact that it is a *fine-grained* dataset. --- Rebuttal Comment 1.1: Comment: Dear reviewer wHh2, A detailed author rebuttal is in. Please share your thoughts post rebuttal. Thanks. AC --- Rebuttal 2: Title: For requests for further discussion Comment: We appreciate your constructive feedback and have provided corresponding responses. We are more than willing to address any further questions or concerns you may have. Please feel free to ask anything or discuss any further points of clarification. --- Rebuttal Comment 2.1: Comment: Thanks for the clarification! I think the authors' response has mostly addressed my concerns. Therefore, I would like to increase the score to 5. 
--- Reply to Comment 2.1.1: Title: Appreciation for Reviewer WHh2 Comment: Thank you for adjusting your score. We are happy we managed to address your concern.
Rebuttal 1: Rebuttal: We express our profound gratitude to the reviewers for their insightful and valuable comments. We are pleased that the reviewers appreciate the simplicity and efficiency of the proposed method TEEN (RFbX, wHh2, 3Zyg, TNew) and find our work clear and easy to follow (RFbX, 3Zyg, aEVg). They also consider our method and empirical analysis interesting, novel and well-motivated (wHh2, aEVg, 3Zyg). Reviewer wHh2 acknowledges our detailed ablation studies. In particular, reviewer RFbX acknowledges that our approach **takes a new class orientation, pushing the frontier in previously underexplored aspects and facilitating a fair and comprehensive comparison** and praises our work as **attractive**. Before our rebuttal, we recapitulate our *contributions, novelty, and some easily overlooked highlights*: - We empirically find that new class performance was much lower than that of base classes in previous methods. Then, we explore the causes of this phenomenon. *These analyses are first proposed in the FSCIL field, which points out that we should pay more attention to the performance of new classes*. - We also observe that the feature extractor, solely trained on base classes, can capture the semantic similarity between the base and new classes. Based on these analyses, we propose TEEN, which not only achieved a higher average accuracy but also improved the accuracy of new classes. - In addition to the FSCIL task, we also validate TEEN on the Few-Shot Learning (FSL) task, where it also shows competitive performance. In contrast, existing methods lack effectiveness for **both** FSCIL and FSL tasks. To better address the reviewers' concerns, we conducted experiments primarily focusing on three aspects: cross-dataset evaluation, hyper-parameter robustness, and the intuitive explanation of TEEN. - **Cross-dataset few-shot class-incremental learning scenario**. 
We *reuse* the feature extractor pretrained on the base session of **CUB200** and use this feature extractor for the incremental learning on the **miniImageNet** dataset. Specifically, Session 0 consists of 100 classes from the CUB200 dataset, while the incremental sessions (Sessions 1-6) include classes from the miniImageNet dataset in a 5-way 5-shot format. In the cross-dataset incremental learning scenario, the performance of CEC and FACT shows a significant drop, while **TEEN continues to perform competitively**. We speculate that these methods (CEC and FACT) may have designed overly complex modules (e.g., the classifier adaptation module in CEC) or mechanisms (e.g., the forward-compatible paradigm in FACT), which led to excessive adaptation to the pre-training dataset. In contrast, TEEN relies solely on the underlying semantic similarity relationships to calibrate the new class prototypes, which may make TEEN more resilient to dataset shifts.

| | 0 (CUB200) | 1 | 2 | 3 | 4 | 5 | 6 |
| :--: | :--------: | :---: | :---: | :---: | :---: | :---: | :---: |
| CEC | 75.85 | 65.34 | 63.21 | 61.30 | 59.24 | 57.29 | 55.21 |
| FACT | 75.90 | 63.02 | 60.12 | 58.42 | 55.32 | 53.54 | 51.48 |
| TEEN | 77.26 | 68.42 | 66.32 | 65.19 | 62.14 | 59.43 | 58.56 |

- **The robustness of the hyper-parameters $\alpha$ and $\tau$**. We show the performance trend with respect to $\alpha$ and $\tau$ on each benchmark dataset **in the Rebuttal PDF (Figure 2)**. To better illustrate the trends, we have included the replicated trend (Figure 5(a)) in the rebuttal PDF. We observed that the trend of the hyperparameter $\alpha$ remains consistent across all benchmark datasets. Although the trend of the hyperparameter $\tau$ may vary slightly between the CUB200, CIFAR100, and miniImageNet datasets, maintaining uniform settings still allows TEEN to maintain competitive performance. Notably, the trend of $\tau$ in the miniImageNet and CIFAR100 datasets is identical. 
We hypothesize that the slight difference observed in CUB200 may be attributed to the fact that it is a *fine-grained* dataset. - **An intuitive explanation of TEEN using a toy example**. To better help reviewers understand the design of our calibration method, we show an intuitive visualization using a toy dataset sampled from a 2D Gaussian distribution (**Figure 1 in the Rebuttal PDF**). Specifically, we constructed four classes, each corresponding to a Gaussian distribution. Three of these classes served as the base classes (with *a large number of* samples), while one class represented the new class (with *a small number of* samples). By applying TEEN to the new class, we can visually observe that the calibrated prototypes of the new class are closer to the desired class prototypes than their initial uncalibrated counterparts. **We have included the requested comparison and discussions in the corresponding answers and the rebuttal PDF**. The relevant papers mentioned in the response or any further detailed discussions will be included in the final version. Please check the answers to specific comments and the Rebuttal PDF for details. Pdf: /pdf/19195bbe189935c6a9818df014fbde3fc13f3544.pdf
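The toy construction above can be reproduced in a few lines. The following is a hypothetical sketch: the class means, sample counts, and the $\alpha$/$\tau$ values are illustrative choices, not those used in the Rebuttal PDF's Figure 1, and the calibration rule is a reconstruction from the rebuttal's description rather than the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three base classes with abundant samples (means are illustrative).
base_means = np.array([[0.0, 4.0], [4.0, 0.0], [-4.0, 0.0]])
base_protos = np.stack([
    rng.normal(mu, 1.0, size=(500, 2)).mean(axis=0) for mu in base_means
])

# One new class with only a few samples: its empirical prototype is biased.
true_new_mean = np.array([1.0, 2.5])
support = rng.normal(true_new_mean, 1.0, size=(5, 2))
p_new = support.mean(axis=0)

# TEEN-style calibration sketch: cosine-similarity softmax weights
# (temperature tau) over base prototypes, then linear interpolation
# (strength alpha). Hyper-parameter values here are illustrative.
alpha, tau = 0.7, 4.0
sims = (base_protos @ p_new) / (
    np.linalg.norm(base_protos, axis=1) * np.linalg.norm(p_new))
w = np.exp(tau * sims)
w = w / w.sum()
p_cal = alpha * p_new + (1.0 - alpha) * (w @ base_protos)

# Compare both prototypes against the true class mean, as the toy
# figure does visually.
err_raw = np.linalg.norm(p_new - true_new_mean)
err_cal = np.linalg.norm(p_cal - true_new_mean)
print(f"uncalibrated error: {err_raw:.3f}, calibrated error: {err_cal:.3f}")
```

Plotting `support`, `base_protos`, `p_new`, and `p_cal` in 2D reproduces the qualitative picture described above: the interpolation drags the few-shot prototype toward the semantically closest base prototypes.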
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper closely looks at problems in prototypical networks and methods in the context of few-shot class-incremental learning. The authors observe semantic similarity between new and base prototypes, along with new classes being regularly misclassified as semantically similar base classes. Towards this, they use a training-free calibration strategy to guide the ill-calibrated new prototypes using the base prototypes, essentially bridging the lack of training data and suppressing bias. Strengths: - The major strength of the paper lies in the simplicity of the method. Instead of compromising the base class performance, they approach the problem from a new-class orientation. Doing so relieves the method from the convoluted approaches of feature space reservation [1], meta-learning schemes [2] or self-supervision [3]. Instead, the method's strength lies in utilising the uncompromised power of the base prototypes. - The fact that almost all evaluation metrics one way or the other focus on the novel class performance and not the less interpretable average accuracy and performance decay rate makes the paper even more attractive. This shows commitment towards pushing the frontier in aspects previously underexplored and facilitating a fair and thorough comparison. - The work is constructed clearly and simply. It seems obvious that the authors followed a train of thought. They end up convincing the reader on why each module is essential. It generally reads very smoothly, hopping from hypothesis to observations to results and repeating. For example, Section 3, which tries to “understand the reason for poor performance in new classes”, leads smoothly into the observations for that section, which motivate the section on calibration (Section 4), and this section takes a similar hypothesis-observation-result route.
Weaknesses: - Section 4 uses a variety of terminology. Using a consistent term for each prototype could potentially read better. (For example ill-calibrated new prototypes, new prototypes, biased prototypes) - In relation to prior work, it seems only Section 1 is truly dedicated to the literature. And even then, it seems the prior works that this paper branches out from are not discussed in appropriate detail. Prior methods from the 3 other papers from Table 1 and Table 2 are not detailed in any preceding or subsequent section. There would be reason to do so, as the method directly criticises (line 86-87 or line 190-191) prior works. Prefacing in better detail how previous methods mitigate biases could be essential in understanding the state of the literature and motivating TEEN. - [4] show in “Simpleshot” that L2 normalisation of prototypes improves performance. This is also confirmed by [5] in their nearest-neighbour method for few-shot learning. The authors overlooked this crucial feature transformation, which is commonly conducted in previous studies on non-parametric methods for FSCIL, for example the NoNPC method by Oh et al. [6]. This omission somewhat limits the comprehensiveness of their findings despite the improved results, given the prevalence of the method. - [7], [8] are missing comparisons. Therefore, the tables that compare the method to the current sota would show less prominent results.
Minor Remarks and Typos: - 213: “rely” should be “relies” - 2: “scarce” - The use of the tau parameter in equation (5) might be better understood if shifted to equation (6), as S_{b,n} is a scalar, and in any case it seems more common that a temperature scaling parameter appears in the softmax. Would recommend making that alteration. - 73: “neglect” should be “negligence” [1] Zhou, Da-Wei, et al. "Forward compatible few-shot class-incremental learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Zhang, Chi, et al. "Few-shot incremental learning with continually evolved classifiers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. [3] Ahmad, Touqeer, et al. "Few-shot class incremental learning leveraging self-supervised features." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [4] Yan Wang, Wei-Lun Chao, Kilian Q. Weinberger, and Laurens van der Maaten. Simpleshot: Revisiting nearest-neighbor classification for few-shot learning. arXiv preprint arXiv:1911.04623, 2019. [5] Wang, Guangpeng, and Yongxiong Wang. "Self-attention network for few-shot learning based on nearest-neighbor algorithm." Machine Vision and Applications 34.2 (2023): 28. [6] Oh, J., & Yun, S.-Y. (2022). Demystifying the Base and Novel Performances for Few-shot Class-incremental Learning. http://arxiv.org/abs/2206.10596 [7] Peng, Can, et al. Few-Shot Class-Incremental Learning from an Open-Set Perspective. ECCV 2022. [8] Yibo Yang, Haobo Yuan, Xiangtai Li, Zhouchen Lin, Philip Torr, and Dacheng Tao. Neural collapse inspired feature-classifier alignment for few-shot class-incremental learning. In International Conference on Learning Representations, 2023. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - See weaknesses. - Discussions and comparisons to the most recent sota are necessary. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately mentioned the limitations. They also motivate the setting where the pre-training base classes come from a different data distribution than the incremental new classes, which is actually a more realistic setting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** Section 4 uses a variety of terminology. Using a consistent term for each prototype could potentially read better. (For example ill-calibrated new prototypes, new prototypes, biased prototypes) **A1** We thank reviewer RFbX for the kind and meaningful writing suggestions. We will fix the terminology and typos in the final version. --- **Q2** In relation to prior work it seems only Section 1 is truly dedicated to the literature. And even then it seems the prior works that this paper branches out from are not discussed in appropriate detail. Prior methods from the 3 other papers from Table 1 and Table 2 are not detailed in any preceding or subsequent section. There would be reason to do so, as the method directly criticises (line 86-87 or line 190-191) prior works. Prefacing in better detail how previous methods mitigate biases could be essential in understanding the state of the literature and motivating TEEN. **A2** We thank reviewer RFbX for the constructive suggestions. We summarize ProtoNet, CEC and FACT in this response, and a more detailed introduction will be included in the final version. These three baselines freeze the feature extractor to mitigate the forgetting problem in incremental learning scenarios and adopt the prototype-based classifier to mitigate the overfitting problem in the few-shot scenario. We summarize their differences separately. - ProtoNet is a popular baseline method in the FSCIL field. It only pre-trains the feature extractor on base classes with a vanilla cross-entropy loss and plugs in the prototypes of novel classes to jointly recognize the base and novel classes. - CEC follows a meta-learning paradigm and constructs pseudo few-shot tasks in the pre-training stage. Based on these pseudo few-shot tasks, CEC trains a classifier adaptation module to adapt to new classes in the incremental sessions.
- FACT follows a forward-compatible paradigm to preserve some feature space for the incoming novel classes. **Summary**: Although these FSCIL methods achieve a competitive average performance on base and new classes, we observe that they still perform poorly on new classes (see Section 3). Our method TEEN aims to *explicitly* mitigate the problem of poor performance on new classes, bridge the gap between base and novel classes, and finally achieve better average performance. --- **Q3** [4] show in “Simpleshot” that L2 normalisation of prototypes improves performance. This is also confirmed by [5] in their nearest-neighbour method for few-shot learning. The authors overlooked this crucial feature transformation, which is commonly conducted in previous studies on non-parametric methods for FSCIL, for example the NoNPC method by Oh et al. [6] **A3** We thank the reviewer RFbX for providing the related works. We agree that the widely-used $L_2$ normalization of the prototypes is beneficial in few-shot related tasks. Therefore, we do not overlook this transformation but adopt it in Lines 219-220 (Eq. 6). Besides, although [6] also adopts $L_2$ normalization, **there still exist major differences between our paper and NoNPC [6]**. For example, we explicitly *observe and explore the reason for the low performance of new classes* in the FSCIL scenario, which is absent in [6]. Based on our **unique** empirical analysis, we propose *explicitly* calibrating the novel prototypes through the implicit semantic similarity overlooked by existing works with a *training-free* method. A more detailed discussion of NoNPC will be included in the final version. --- **Q4** [7], [8] are missing comparisons. Therefore, the tables that compare the method to the current sota would show less prominent results. **A4** We appreciate the valuable suggestions from the reviewers regarding the strong baseline methods.
We acknowledge that TEEN may not directly outperform the missing baseline methods [8]. However, in our supplementary material, we demonstrate the potential of TEEN as a *plug-and-play* module in Figure 1 and Figure 2. We have tried to integrate TEEN with the mentioned method NC-FSCIL and fine-tune TEEN's two hyper-parameters in the evaluation stage of NC-FSCIL [8]. The plug-and-play nature of TEEN also allows it to achieve competitive performance.

| | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| :--------------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ALICE [7] | 80.60 | 70.60 | 67.40 | 64.50 | 62.50 | 60.00 | 57.80 | 56.80 | 55.70 |
| NC-FSCIL [8] | 84.02 | 76.80 | 72.00 | 67.83 | 66.35 | 64.04 | 61.46 | 59.54 | 58.31 |
| NC-FSCIL w/ TEEN | 84.06 | 76.82 | 72.12 | 67.86 | 66.41 | 64.21 | 61.63 | 59.55 | 58.43 |

--- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. After reading the rebuttal and the comments of the other reviewers, my concerns have been thoroughly addressed, and thus I maintain my recommendation towards the acceptance of this work.
null
null
null
null
null
null
Bootstrapping Vision-Language Learning with Decoupled Language Pre-training
Accept (spotlight)
Summary: This paper proposes to add a new component, a P-Former, to the BLIP-2 vision-language pre-training framework. The P-Former is a sentence encoder that learns to project texts into the input space of an LLM. During vision-language pre-training, an additional alignment loss is applied between the BLIP-2 Q-Former's visual feature and the P-Former's text feature, which improves the alignment between the Q-Former and the frozen LLM, leading to improved image-to-text generation performance. Strengths: - This paper explores an interesting direction to improve BLIP-2 pre-training with an additional text encoder. The text encoder serves as a surrogate of the LLM to help better align the Q-Former with a frozen LLM. This could open up future research opportunities. - The experiments show some good improvements w.r.t. the original BLIP-2 on image-to-text generation tasks, which verifies the improved alignment with the LLM. It is also good to see improvements with both OPT and FlanT5. Weaknesses: I enjoy the core idea of the paper. However, the paper still needs much improvement in multiple aspects, as detailed below. 1. The motivation and problem formulation is a slight misrepresentation of the actual method. The actual role of the P-Former is to serve as a surrogate of the LLM to help the Q-Former better align with the frozen LLM in both stage-1 and stage-2 pre-training. 2. The proposed method has a significant conceptual difference from BLIP-2: the stage-1 pre-training is *not* LLM-agnostic. This means that the more expensive stage-1 pre-training needs to be performed for every LLM. The proposed method thus trades off the Q-Former's flexibility for better alignment with the LLM. This point needs to be highlighted. 3. The pre-training of the P-Former is rather complex, where the intuition of each training loss is not clear. There needs to be more discussion and ablation study on the multiple losses. 4.
The effect of the P-Former alignment may or may not be maintained given more pre-training data and longer pre-training epochs. This is hard to verify given limited compute, so I'm not expecting the authors to provide answers. 5. The writing could be improved. Personally, I feel many equations to be unnecessary because they add additional burdens to understanding the method. Simple plain text would be good enough to describe the method clearly. The "forward-decoupled" and "backward-decoupled" concepts are also not intuitive to understand. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Is the P-Former finetuned during stage-1 and stage-2 pre-training? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: The motivation and problem formulation is a slight misrepresentation of the actual method. The actual role of the P-Former is to serve as a surrogate of the LLM to help the Q-Former better align with the frozen LLM in both stage-1 and stage-2 pre-training.** Re: The P-Former does indeed function as a surrogate for the LLM, assisting the Q-Former to align optimally with the frozen LLM during both stage-1 and stage-2 pre-training. The driving force behind our methodology, however, was to develop a decoupled training approach. Our primary intention was to create an optimal prompt, learned by the P-Former, to guide the VL-connector (e.g., Q-Former) effectively. **W2: The proposed method has a significant conceptual difference from BLIP-2: the stage-1 pre-training is not LLM-agnostic. This means that the more expensive stage-1 pre-training needs to be performed for every LLM. The proposed method thus trades off the Q-Former's flexibility for better alignment with the LLM. This point needs to be highlighted.** Re: You've correctly identified a departure in our approach from BLIP-2: our stage-1 pre-training is indeed not LLM-agnostic. This difference will be highlighted in our revised manuscript. However, as evidenced in the 3rd row of Table 4 (w1=0 and w2=100) from our ablation study, our approach achieves competitive results even without alignment loss in stage-1, focusing the alignment solely on stage-2. **W3: The pre-training of the P-Former is rather complex, where the intuition of each training loss is not clear. There needs to be more discussion and ablation study on the multiple losses.** Re: The contrastive loss is employed to enhance the semantic richness of the P-Former’s output, ensuring similar images/sentences yield similar prompts.
Furthermore, our findings indicate that the inclusion of vocabulary loss (which can be viewed as a regularization term) slightly improves our results (without vocabulary loss: GQA: 33.5, OKVQA: 29.5, VQA: 51.7. With vocabulary loss: GQA: 34.0, OKVQA: 30.0, VQA: 52.6). **W4: The effect of the P-Former alignment may or may not be maintained given more pre-training data and longer pre-training epochs. This is hard to verify given limited computation, so I'm not expecting the authors to provide answers.** Re: Conducting experiments with the 129M dataset presents challenges for us, given that we possess a maximum of 8 GPUs, while the original results reported by BLIP2 on the 129M dataset utilized 16 GPUs for about 9 days. The primary goal of our method is to streamline the training process and make efficient use of the available training data. As such, we anticipate that our approach might show modest gains in a 129M setting, particularly if the model undergoes extensive training. In fact, a key motivation behind P-Former is to reduce the dependence on vast multi-modal datasets and models. This approach not only simplifies the training process but also democratizes participation, ensuring that research in this area isn't solely the domain of entities with access to significant computational resources. **W5: The writing could be improved. Personally, I feel many equations to be unnecessary because they add additional burdens to understanding the method. Simple plain text would be good enough to describe the method clearly. The "forward-decoupled" and "backward-decoupled" concepts are also not intuitive to understand.** Re: We are grateful for your feedback. We will revisit our manuscript to ensure greater clarity, potentially simplifying certain sections for enhanced comprehension. Complex concepts like “forward-decoupled” and “backward-decoupled” will be elaborated upon to provide a more intuitive understanding.
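The contrastive objective mentioned in the answer to W3 above could be sketched roughly as follows, under the assumption that it is an InfoNCE-style loss over sentence/prompt embeddings; all names, shapes, and the temperature value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.07):
    """InfoNCE-style contrastive loss: matched rows of z1/z2 (two views of
    the same sentence's prompt embedding) are positives; all other rows in
    the batch serve as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

rng = np.random.default_rng(1)
anchor = rng.normal(size=(8, 32))
positive = anchor + 0.05 * rng.normal(size=(8, 32))   # near-duplicate view
loss_aligned = info_nce(anchor, positive)             # low: views agree
loss_random = info_nce(anchor, rng.normal(size=(8, 32)))  # high: no agreement
```

The intended effect, as stated in the rebuttal, is that similar sentences map to similar prompts, which this loss encourages by pulling matched embeddings together and pushing apart unmatched ones.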
**Q1: Is the P-Former fine-tuned during stage-1 and stage-2 pre-training?** A: The P-Former remains frozen during both stage-1 and stage-2. Learned during stage 0, the P-Former is designed to predict semantically rich soft prompts for individual instances during the subsequent stages. Its fixed state during these stages is pivotal for generating alignment losses, directing the outputs of the VL-connector (e.g., Q-Former in BLIP2). --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response. I confirm my original score and recommend acceptance. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you! We appreciate the time and effort.
Summary: This paper introduces a vision-language pre-training method with the help of the proposed Prompt-Transformer. As the first step, the P-Former is optimized to learn the "optimal" soft prompt that can guide the LLM to generate the target texts. After that, the trained P-Former is frozen and used to train the visual adaptor of the VLM. Strengths: 1. The method is novel in that it utilizes an intermediate P-Former to effectively optimize the VLM. 2. The method achieves impressive results in terms of performance and training-data efficiency. Weaknesses: 1. In Table 1, it is not fair to only list the numbers of image-text pairs considering that P-Former also consumes language data for training. Similarly, in the paragraph of Line 216, the overhead of training P-Former should also be taken into consideration. 2. After reading the paper, it is still a bit unclear why, in essence, P-Former is effective. Does it function as some type of knowledge distillation? Please provide more discussion about this. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors discussed three limitations in the paper. I am mainly concerned about the last one, that the method cannot handle cross-attention models. With the development of VLMs, cross-attention or joint modeling could be a very important branch of methods. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: In Table 1, it is not fair to only list the numbers of image-text pairs considering that P-Former also consumes language data for training. Similarly, in the paragraph of Line 216, the overhead of training P-Former should also be taken into consideration.** Re: We agree that it's crucial to present a more comprehensive view of data consumption, factoring in both image-text pairs and the language data leveraged in P-Former training. While our original rationale was centered around the ease of accessibility and abundance of unimodal language data relative to image-text pairs, we understand the necessity of providing a balanced representation. Additionally, it's essential to underscore that the overhead of training the P-Former, although present, is a one-time process for a given LLM. This means it does not necessitate repeated training for different tasks (e.g., image/video/audio or QA/caption) provided the same OPT-2.7b is utilized. In our revised manuscript, we will ensure that these points are clearly articulated, and the computation overhead section is enhanced. **W2: After reading the paper, it is still a bit unclear why, in essence, P-Former is effective. Does it function as some type of knowledge distillation? Please provide more discussion about this.** Re: - In our experiments with base-models like BLIP-2, the architecture consists of three sequential components: (1) ViT, (2) VL-connector, and (3) LLM decoder. Given that we use a frozen LLM for generation, optimizing closer to the LLM decoder becomes more pivotal for achieving optimal generation quality. - The unique design of P-Former mirrors a sentence embedding model, as evidenced in lines 158 to 163. This means the prompts predicted by the P-Former carry rich semantics. Therefore, during evaluations on unfamiliar images, the model boasts an improved generalization capability. 
- BLIP2's studies indicate that direct end-to-end optimization of the sequential model can sometimes lead to catastrophic forgetting. Our approach adds an additional layer of complexity by decomposing the two-stage BLIP2 training into three stages, further addressing this optimization challenge. - For BLIP2, optimization of the soft prompt is learned only using text from image-text pairs, while our decoupled training allows for leveraging additional unimodal data for optimizing these soft prompts. We will include a discussion of this intuition in our revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. My concerns are basically addressed.
Summary: This paper introduces a prompt-transformer (P-Former), pretrained on a text corpus, that can trigger the LLM to generate better text prompts, to which the vision-and-language models then align their visual features. Empirical experiments on top of BLIP-2 show promising results on several tasks. Strengths: 1. This paper introduces a separate stage for the P-Former, to trigger the LLM to generate text prompts to align with visual features, verifies the idea on top of BLIP-2, and shows promising results on several tasks. Weaknesses: 1. May need more comprehensive experiments to show where the gain is from. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Where is the gain from? I assume the stage 1 in Figure 3 and Line 211 is just taken from the pretrained BLIP model, right? If not, do you take a pretrained BLIP checkpoint and continue pretraining for 10 epochs? And do you have a comparison for the stage 1 pretraining, compared to BLIP-2? The main issue here is to try to make an alignment with BLIP-2 to see where the difference/improvement comes from: stage 1 or stage 2. 2. Is the P-Former agnostic to different vision encoders? If changing a vision encoder, does the framework still work? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See the questions part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 and Q1: Where is the gain from? I assume the stage 1 in Figure 3 and Line 211 is just taken from the pretrained BLIP model, right?** Re: Our approach in stage 1 is grounded in equation (8) and comprises dual learning objectives: the first one originates from BLIP2, while the second alignment loss is introduced by our P-Former. We have not adopted pretrained BLIP2 weights; we only employ its objective functions. The only learnable parameters in equation (8) are those in the Q-Former, which are randomly initialized. **If not, do you take a pretrained BLIP checkpoint and continue pretraining for 10 epochs? And do you have a comparison for the stage 1 pretraining, compared to BLIP-2? The main issue here is to try to make an alignment with BLIP-2 to see where the difference/improvement comes from: stage 1 or stage 2.** Re: Regarding the checkpoint continuation and comparison with BLIP-2: We initiate our vision-to-language model, ViT-Qformer-OPT, with a pretrained ViT and OPT, whereas the Q-former remains randomly initialized. A preliminary "stage 0" serves to train the P-former, which is subsequently frozen. Our subsequent approach can be summarized as follows: - Stage 1: We train the ViT (which remains frozen) and the learnable Q-former using equation (8), with the Q-former being initialized randomly. - Stage 2: Here, we employ the Q-former from stage 1 and train the ViT-Qformer-OPT tandem via equation (9).
To succinctly delineate our method from BLIP2: Our stage 1 is represented by equation (8). Our stage 2 is represented by equation (9). By contrast, BLIP2's stage 1 corresponds to equation (8) but with w1 set to 0, and BLIP2's stage 2 matches equation (9) but with w2 set to 0. **Where does the difference/improvement come from? stage 1 or stage 2.** Re: By introducing w1 and w2 in equations (8) and (9) respectively, we've enabled an ablation to answer this question. As depicted in our ablation study (Table 4), we examine cases with w1=0 and w2=0. By setting w1 or w2 to zero, we effectively bypass the influence of the P-former, making the approach analogous to the original BLIP-2 training. The results indicate that the alignment loss introduced by our P-former contributes to both stage 1 and stage 2. **Q2: Is the P-Former agnostic to different vision encoders? If changing a vision encoder, does the framework still work?** Re: Our framework is intrinsically adaptable to different encoders. This is illustrated in section 4.5 where we transition from the attention-driven ViT to a convolution-centric I3D model for video encoding. Notwithstanding this shift, our results (displayed in Table 7) remain very competitive on video-to-language generation. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I am inclined to recommend a weak acceptance, either 5 or 6.
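The staged objectives delineated above can be sketched schematically. All variable names, tensor shapes, and scalar loss values below are hypothetical stand-ins (the real BLIP-2 stage-1 and language-modeling losses are replaced by placeholder scalars); the sketch only illustrates how the weights w1 and w2 combine the alignment loss with the base objectives, and how setting them to zero recovers plain BLIP-2 training:

```python
import numpy as np

def alignment_loss(q_out, p_out):
    # MSE between the Q-Former's output and the frozen P-Former's predicted
    # prompt (the LLM-input surrogate) -- illustrative form only.
    return np.mean((q_out - p_out) ** 2)

# Hypothetical stand-ins: 32 query tokens with 768-dim features (shapes made up).
q_former_out = np.random.default_rng(2).normal(size=(32, 768))
p_former_out = np.random.default_rng(3).normal(size=(32, 768))
blip2_stage1_loss = 1.7   # placeholder for BLIP-2's ITC + ITM + ITG terms
lm_loss = 2.3             # placeholder for the stage-2 language-modeling loss

w1, w2 = 1.0, 100.0       # weights from Eqs. (8)-(9); w1 = w2 = 0 recovers BLIP-2
stage1_loss = blip2_stage1_loss + w1 * alignment_loss(q_former_out, p_former_out)  # Eq. (8)
stage2_loss = lm_loss + w2 * alignment_loss(q_former_out, p_former_out)            # Eq. (9)
```

This also makes the ablation in Table 4 easy to read: each row simply changes (w1, w2), with the third row (w1=0, w2=100) applying the alignment loss in stage 2 only.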
Summary: To improve image-to-text generation, this paper proposes a proxy model, P-Former, to predict LLM prompts, and uses it as an auxiliary loss in BLIP-2 to align selected features with LLM prompts. Experiments show promising results, especially in 0-shot VQA tasks. Strengths: Nice novelty by introducing a proxy model for LLM prompt prediction to enhance image-to-text generation. The experiments, ablations and analysis are comprehensive, showing improvements in both VQA (more significant) and captioning (less significant) tasks, and no improvement in image-text retrieval. Well written and easy to read. Weaknesses: This paper only shows the effect of the LLM prompt prediction in an incremental way, i.e., as an auxiliary loss in the existing BLIP-2. It would be more interesting if we could show the effectiveness of P-Former in a cleaner (simpler) setup. minor: The paper title says “Bootstrapping Vision-Language Learning”, which might be too generic given the results of this paper, such as no improvements on image-text retrieval tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As mentioned in “Weaknesses”, instead of plugging P-Former into BLIP-2 as an auxiliary loss (IMO more like an incremental work), it might be more interesting to verify P-Former’s effect by directly predicting the LLM prompts in a captioning model? In line 216, also mention the cost to train P-Former? In line 196, elaborate the size of each dataset? minor comment: in equation 7, the parentheses seem incorrect? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No specific concern.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 and Q1: This paper only shows the effect of the LLM prompt prediction in an incremental way, i.e., as an auxiliary loss in the existing BLIP-2. It would be more interesting if we could show the effectiveness of P-Former in a cleaner (simpler) setup.** Re: A potential pitfall in bypassing the LM loss entirely is that errors originating at the prompt level might be amplified as they navigate through the LM decoder. Given the end goal is caption/text generation, the LM loss remains a pivotal factor for ensuring the proficiency of such a model. Consequently, it may be challenging to completely forgo the LM loss, especially when considering a language generative model. Further emphasizing the versatility of our method, it's adaptable across various modalities, including image, video, and audio. Moreover, it can be integrated into any VL model that employs prompts as interfaces. To illustrate this adaptability, we reference Section 4.5 where we highlighted the efficacy of our approach in video captioning, **leveraging a model distinct from BLIP2**. **Q2: In line 216, also mention the cost to train P-Former? In line 196, elaborate the size of each dataset? minor comment: in equation 7, the parentheses seem incorrect?** Re: These are thoughtful suggestions. We will make the necessary updates in our final version. Thank you! --- Rebuttal Comment 1.1: Comment: > Given the end goal is caption/text generation, the LM loss remains a pivotal factor for ensuring the proficiency of such a model. IIUC, this is done by the frozen LLM instead of Q-Former? Given the input of the LLM are tokens, can we train a clean generative model to predict these tokens (generated by the pre-trained P-Former). It's fine to reuse Q-Former's architecture and training recipe, but my concern is that why we couldn't use predicting P-Former tokens as the only objective (I assume such a generative model is not that large, unlike the frozen LLM, so maybe this is still doable)? 
--- Reply to Comment 1.1.1: Comment: Theoretically, it is feasible to train our generative model (ViT+Q-Former), denoted as `Gen()`, with the singular objective of predicting the prompt `prompt_gt` generated by the P-Former:

```
prompt_gen = Gen(Image)
L = (prompt_gen - prompt_gt) ** 2.0
```

Nevertheless, driving this loss to exactly zero is an unattainable goal. Significantly, **any deviation in learning the prompt is amplified when the prompt is subsequently passed into the LLM**, given that generation is our end objective:

```
caption_gen = LLM(prompt_gen)
```

Even a minor discrepancy in `prompt_gen` can disproportionately affect the accuracy of `caption_gen`, especially when processed through an intricate LLM. The reviewer's suggestion to rely solely on the prompt alignment loss `L = (prompt_gen - prompt_gt) ** 2.0` is theoretically valid. However, our empirical studies indicate that it fails to deliver the most desirable outcomes. Specifically, when testing the alignment loss in isolation (without the LM loss) on the VATEX dataset for video captioning, we observed scores of CIDEr: 35.1, BLEU-4: 19.8, and ROUGE: 42.1. These results show that our P-Former can adeptly guide vision encoders along favorable trajectories even in the absence of the LM loss. Nevertheless, these scores are inferior to those reported in Table 7. One primary reason is that relying only on the prompt alignment loss excludes both the forward and backward passes through the computationally intensive LLM during training; note that the LLM represents over 70% of the parameters in the entire BLIP-2 ViT-g OPT2.7B model (and over 90% in BLIP-2 ViT-g FlanT5XXL). This is therefore a trade-off not only in performance but also in training cost. While our model demonstrates promising results without the LM loss, it cannot match the results obtained when the LM loss is incorporated.
Herein, our prompt alignment loss serves as an intermediary, aligning the Q-Former more cohesively with the frozen LLM. We concur with the sentiment that a learning framework that circumvents the LLM, yet maintains robust performance, would indeed be groundbreaking due to the substantial reductions in training duration and computational demands.
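To make the discussion above concrete, here is a minimal, framework-free sketch of the prompt alignment loss being debated. The function names and the flat-vector representation are illustrative assumptions (the actual models operate on tensors of soft-prompt embeddings); this is not the authors' implementation.

```python
def alignment_loss(prompt_gen, prompt_gt):
    """Mean squared error between a generated soft prompt and the
    P-Former's target prompt, both given as flat feature vectors.
    Illustrative sketch only."""
    assert len(prompt_gen) == len(prompt_gt)
    return sum((g - t) ** 2 for g, t in zip(prompt_gen, prompt_gt)) / len(prompt_gen)


def total_loss(lm_loss, prompt_gen, prompt_gt, alpha=1.0):
    """Alignment loss used as an auxiliary term next to the LM loss;
    the weighting alpha is a hypothetical hyperparameter."""
    return lm_loss + alpha * alignment_loss(prompt_gen, prompt_gt)
```

Training with `alignment_loss` alone corresponds to the alignment-only ablation discussed in this thread, while `total_loss` reflects keeping the LM loss in the objective.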
Rebuttal 1: Rebuttal: Dear Reviewers: We sincerely thank you for your thoughtful reviews and constructive feedback on our paper. We are heartened to see that our proposal of adding a new component, a P-Former, to the X-language (X being any modality) pre-training framework resonated well with all of you. Your appreciation of our exploratory direction, the improvements shown in our experiments, and the potential impact on future research is highly encouraging. Your insights have aided in refining our understanding, and we have addressed each of your comments individually. In addition to our detailed response, we've incorporated qualitative comparisons of our method and BLIP-2 for specific datasets, such as GQA and OKVQA (see attached pdf). We will include a section on qualitative analysis in the appendix of our revised paper. Pdf: /pdf/b3e5c733665fb9af187658c4d92a2a79fc947a41.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces a novel approach for optimizing the application of large language models in resource-intensive vision-language pre-training. Unlike the traditional approaches of using visual features as prompts to guide language models, the paper focuses on identifying optimal prompts to align with visual features. This is achieved through the introduction of the P-Former, a module trained only on linguistic data, eliminating the need for image-text pairings. Strengths: 1. Originality: The paper demonstrates a high level of originality in its approach to vision-language learning. By introducing the P-Former, the paper presents a unique perspective in contrast to the standard approach of using visual features as prompts. Also, the focus on a modality-agnostic framework expands the applicability to various VL tasks, further enhancing the originality of the paper. 2. Quality: The paper exhibits a good level of quality in various aspects. The methodology is well-formulated, and the introduction of P-Former is well-justified. Also, it provides sufficient details about the experiments and demonstrates the robustness and flexibility of the framework across different VL tasks. 3. Clarity: The paper is written with clarity and precision. The abstract, introduction, and related work sections provide a clear context and properly position the paper in the literature. The technical concepts are well-explained, including the P-Former and the decoupled language pre-training approach. The use of figures and tables further helps in understanding the experiments and results. 4. Significance: The significance of the paper is in the decoupled language pre-training approach, with the P-Former's ability to predict ideal prompts, addressing resource-intensive challenges in this field. Also, it offers a fresh perspective on how to leverage large LMs in VL learning scenarios. 
Weaknesses: - The idea of the P-Former is interesting; however, it seems to lack an intuitive explanation and motivation. Why does learning an ideal language prompt help more than using visual ones, as in the counterpart models? - There seems to be a lack of some ablations, which naturally arise as questions, for example, experimenting with and without the P-Former module, maybe just by using a randomly initialized prompt p, using different sizes/types of language backbones, different ways of initializing the prompt p at the beginning, etc. - Also, there is a lack of qualitative analysis of the experiments. I would recommend including and analyzing qualitative results in comparison to existing approaches. Presenting visual examples of the model's performance in both successful and failure cases can make the paper stronger. - Some experiments lack stronger interpretation. I would also encourage the authors to provide more interpretation of some results (e.g. Tables 1, 2, 3), rather than describing the tables, to enable the readers to gain better insights into the behavior of the framework. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Could the authors provide more details and clearer insights into the P-Former part of the framework and its size? - Across all experiments, it can be seen that more data helps BLIP-2 to outperform the P-Former. I wonder if the P-Former will improve the best performance of BLIP-2 if the pre-training image-text data is larger (129M)? - In Table 3, it can be observed that the performance of the proposed model is lower when it comes to retrieval. Could you interpret these results and elaborate more on this limited performance? - What is the reasoning behind the need for a separate pre-training of the P-Former on language data? Why do additional unimodal sentences contribute to the performance in Table 6? 
- Since the framework is presented as modality agnostic, do you think it can handle language-only tasks, such as machine translation? If so, which parts of the framework would need to be adapted to handle this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have provided a sufficient discussion of the limitations of their work in section 5. Due to the usage of pre-trained backbones, I would also encourage the authors to discuss any potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: Seems to lack an intuition on why learning an ideal language prompt helps?** Re: - Models like BLIP2 consist of three sequential components: (1) ViT, (2) VL-connector, and (3) LLM decoder. Given that we use an LLM for generation, optimizing closer to the LLM (i.e., prompts, as compared to visual features) becomes more pivotal for achieving optimal generation quality. - The design of P-Former mirrors a sentence embedding model (lines 158 to 163). This means the prompts predicted by the P-Former carry rich semantics. Therefore, the model boasts an improved generalization capability. - BLIP2's studies indicate direct end-to-end optimization can sometimes lead to catastrophic forgetting. Our approach decomposes the 2-stage training into 3 stages, further addressing this optimization challenge. - BLIP2's (implicit) refinement for soft prompts is exclusively achieved through the utilization of textual content from image-text pairs. In contrast, our decoupled training approach empowers us to harness supplementary unimodal data, facilitating the enhancement of these soft prompts in a more comprehensive manner. **W2: Lack some ablations, e.g., w/o the P-Former module, maybe randomly initialized prompt, different backbones, etc.** Re: That is a great question which highlights critical facets of the experimentation process. - *Random Initialization and Learning Without P-Former*: Our initial approach was, as you mentioned, to directly learn from a randomly initialized prompt p without incorporating the P-Former. But, upon testing, we identified a significant challenge. For a smaller model variant like opt-2.7b, which possesses a hidden size of 2560, if we employ 32 tokens as soft prompts for an expansive dataset with 4M sentences, the resultant model would have to accommodate an overwhelming 327B parameters. 
This would not only have computational implications; learning from such a vast parameter space could also dilute the essential semantic connections between various sentences. - *P-Former's Efficiency in Parameterization*: The P-Former emerged as a solution to this parameter explosion problem. P-Former parameterizes the soft prompt $p$ using a semantically-rich model. This design ensures that the total number of parameters remains fixed at 110M. The major advantage here is scalability. Whether we're working with a dataset of 4M, 12M, or 129M, and regardless of the LM decoder's size, the P-Former guarantees a constant parameter count, making the model more computationally efficient and preventing the loss of essential semantic relationships. Future iterations of our research will undoubtedly delve deeper into the areas you've highlighted, providing a more comprehensive understanding of the P-Former's capabilities. **W3: Lack of qualitative analysis** Re: We value the feedback. We've incorporated qualitative comparisons for datasets such as GQA and OKVQA, allowing us to offer more nuanced insights (in the uploaded PDF). We have demonstrated several examples comparing our model's response with BLIP-2 and the ground truth (GT). We will include a section on qualitative analysis in the appendix of our revised paper. **W4: Provide more interpretation of some results, rather than describing the tables.** Re: Thank you for the suggestion. We agree that more interpretation of the results will make our paper stronger. We will update our results section to include such a discussion. **Q1: Provide more details and clearer insights into the P-Former** Re: This is similar to our response to W1 and W2. Please see our responses above. 
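The parameter-explosion argument in the W2 response above can be checked with quick back-of-envelope arithmetic (all numbers are quoted from that response; the computation itself is only an illustrative sketch):

```python
# Numbers quoted in the rebuttal: 4M sentences, 32 soft-prompt tokens
# per sentence, and opt-2.7b's hidden size of 2560.
num_sentences = 4_000_000
tokens_per_prompt = 32
hidden_size = 2560

# Learning a free soft prompt for every sentence:
free_prompt_params = num_sentences * tokens_per_prompt * hidden_size
print(free_prompt_params)  # 327,680,000,000 -> the ~327B figure above

# Versus the fixed-size P-Former (~110M parameters), which is
# independent of dataset size and of the LM decoder's hidden size.
p_former_params = 110_000_000
print(free_prompt_params // p_former_params)  # 2978 -> roughly 3000x more
```

The ratio makes the scalability point explicit: the P-Former's parameter count stays constant as the sentence corpus grows, while per-sentence soft prompts grow linearly with it.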
**Q2: if the P-Former will improve BLIP-2 in 129M dataset setting?** Re: Conducting experiments with the 129M dataset presents challenges for us, given that we have at most 8 GPUs, while the original results reported by BLIP2 on the 129M dataset utilized 16 GPUs for about 9 days. The primary goal of our method is to streamline the training process and make efficient use of the available training data. As such, we anticipate that our approach might show modest gains in a 129M setting, particularly if the model undergoes extensive training. In fact, a key motivation behind P-Former is to reduce the dependence on vast multi-modal datasets and models. This approach not only simplifies the training process but also democratizes participation, ensuring that research in this area isn't solely the domain of entities with access to significant computational resources. **Q3: Elaborate more on this limited performance in retrieval?** Re: The retrieval tasks primarily rely on the ViT and Q-Former. Since the P-Former is designed exclusively for language generation by the LLM, it doesn't influence the retrieval processes, especially tasks that rely heavily on the contrastive objective. Our results in Table 3 aim to show that introducing the P-Former doesn't negatively affect existing functionalities. **Q4: reasoning for a separate pre-training of the P-Former on language? Why do additional sentences contribute?** Re: (1) We drew inspiration from BLIP2’s observations that a 2-stage training is easier to optimize than end-to-end. Our strategy introduces an additional pre-training stage for the P-Former, and our results confirm its benefits. (2) Introducing additional unimodal sentences enhances the learning of equation 4. With a fixed language decoder $D$, additional text $t$ inputs facilitate the learning of a better P-Former $E$. **Q5: Language-only tasks, e.g., machine translation?** Re: Absolutely, our framework can be adapted for language-only tasks, including machine translation. 
The key adjustments would involve simply substituting the ViT with a language encoder and swapping the VL connector for a language-to-language connector. While a few adjustments are necessary, translations have proven effective when utilizing an encoder coupled with cross-attention, as exemplified in *'Attention is All You Need'*. For tasks focused solely on language, cross-attention might be the more intuitive approach. --- Rebuttal Comment 1.1: Comment: Thanks for the response, it clarified all my concerns. I would recommend acceptance. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We greatly appreciate the reviewer's confidence in our work.
3D Indoor Instance Segmentation in an Open-World
Accept (poster)
Summary: This work tackles the task of incremental object-discovery for 3D semantic instance segmentation. Unlike numerous concurrent works, it is not unsupervised but enables users (or an oracle) to label objects that were identified as unknown in each iteration. The method is evaluated on ScanNet200; the paper proposes three new splits/tasks and measures three different metrics as defined in prior work. Strengths: The proposed approach has considerable practical significance as it enables training on labelled corpora of data and provides the option to users to label the identified unknown objects and then refine the trained model once training labels are provided by a user. Weaknesses: The manuscript is unclear on numerous occasions to the point that it is hard to understand the method (see questions below, these should be addressed in the rebuttal and incorporated in an updated version). In general, the paper might be clear for those that worked on this project (I assume for a significant amount of time) but the writing is challenging to understand by someone that reads about the project for the first time (unfortunately this makes it really difficult to write a meaningful review). Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Things that are not clear and should be improved: - Looking at the videos and experiments: There are two methods 3D-OWIS-PC-CT and 3D-OWIS - are both yours? Or is one the baseline that is not yours, and the other yours? Maybe you can clearly mark yours with “Ours”. - l.106-110: “the learned that updates itself” -- what does it mean? Is it correct that after one iteration of predictions by the model, the user labels the unknown classes, and then (in the next iteration) the model is trained again based on the existing and additionally provided user annotations? 
If this is the case, it is confusing to say “continuously improving itself (l.109)” since that somehow suggests that the model can improve without human intervention whereas here it seems to rely on human labels? I do not question the usefulness of user input after each iteration (I think it is very desirable), I simply do not understand what is happening in the end - the description should be improved. - Figure 3 / Sec 3.2 Split B - I do not understand the split - what does it mean “exploring indoor areas” / “accessing an indoor space”? Is there a robot walking around in a simulation (I guess not)? - l.132 - what is “the auto labeler”? The term is used here for the first time without explanation. The text then continues in l.136-143 explaining the drawbacks and proposed solutions. Since I don’t know what an auto-labeler is, these two paragraphs did not provide any information to me so I ignored them for writing this review. L. 144 “the auto labeller depicted in Fig.2” - it is not depicted in Fig2 - there is only a box with the name “auto labeled” - this does not provide any useful information to understand what is happening. - Eq. 2, what is a class prototype? Do we have a prototype for each semantic class, i.e. a max of 200 in ScanNet200, or does it refer to individual object instances? The text suggests that it refers to semantic classes. If that is the case, how do you separate different instances that have the same semantic class? - l.175 - what is 3D-OWIS? Is it the same as [20]? After reading l.172-177, they seem to be the same, but I could not find the name 3D-OWIS in [20]. Is 3D-OWIS maybe the name of the method proposed in this paper? From the caption of Fig.6 it seems to become clear that 3D-OWIS is the name of the method proposed in this paper. Similarly, in line 244 what is -PC-CT? What are the contributions? What does PC CT stand for? The last line of the caption in Tab.3 indicates that PC is probability corrections and CT is confidence threshold? 
[I have to play detective only to understand what is the name of your method, why do you do this to me? Imagine R2 - they will directly reject the paper :( ] - Why are the Tasks 1-2-3 introduced? To my understanding, it seems to be a redefinition of the existing head-common-tail classes. Why confuse the reader and introduce an alias for sth that is already clearly defined? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: No / not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific queries below. **Clarification on 3D-OWIS-PC-CT.** 3D-OWIS-PC-CT is an extended Mask3D with predictive capabilities to encompass unknown classes. Unlike the original closed-set baseline Mask3D, which is confined to training on a predetermined set of classes and limited to predicting solely within those classes during inference, our novel approach leverages insights from ORE [1] to achieve open-world segmentation capabilities. We meticulously adapted and integrated these insights into 3D point cloud instance segmentation, where we employ contrastive clustering on the final refined queries in the Mask3D decoder to improve the objectness score estimation, which helps increase the quality of the generated unknown pseudo labels. An auto-labeling module is used in 3D-OWIS-PC-CT to select the predicted masks with the highest confidence (top-k) for instances that do not correspond to the ground truth of known classes. These predicted masks are then utilized as pseudo labels to represent the unknown class. **Continuous improvement.** In an open-world learning scenario, a model's progression is split into distinct tasks. In each task, the model is introduced to a specific group of labels that are known, while also being presented with numerous unfamiliar classes. In the first task, the model learns from the known examples and develops the ability to recognize unfamiliar classes within the scene. Moving on to the second task, the model builds on its knowledge from the known classes in the first task and is introduced to a fresh set of known classes. The concept of continuous improvement involves the model efficiently training solely on these new classes without causing its performance on the known classes from the first task to decline. A good model can retain the segmentation ability of old classes during training on new classes. 
This iterative process continues until the model reaches its maximum capacity in terms of the number of classes it can accommodate, determined by the classification head's size. **Split B motivation.** The perfect model for a robot moving indoors should segment both classes it knows and classes it hasn't seen before. Additionally, it should keep learning and getting better at segmenting new classes over time. In split B, we try to emulate how object classes might be sequentially labeled based on the scene types a robot first encounters. This grouping is good to assess how well models can segment objects in scenes that robots encounter while navigating in indoor areas. **Clarification on the auto-labeler.** A closed-set model's prediction only identifies classes it is familiar with, so all masks it predicts get labeled with one of those known classes. To create a substitute "pseudo-ground truth" for the unfamiliar classes, the auto-labeler picks predicted masks with the best objectness and zero overlaps with the known class's actual mask. These picked masks are marked as unknown and help teach the model about the unknown cases. **Clarification on the class prototype.** A class prototype is a summary feature that captures both the spatial and semantic aspects of a class. If, for example, there are 20 known classes, the model stores 21 of these summaries: one for each known class and an extra one for the unknown classes. These prototypes are estimated by keeping an exponential moving average of the features matched with the ground truth for the known classes and the pseudo-ground truth for the unknown classes. Note that in Mask3D 100 queries are refined and each query is used to generate a mask and a class, but only some of them correspond to a real ground truth. 
As a result, the 100 masks and labels generated from the queries are matched with the ground truth for the knowns and the pseudo-ground truth for the unknowns using Hungarian matching; the query feature is then selected to represent its matched ground truth and stored to perform the class prototype estimation of the classes using an exponential moving average. **Clarification on the 3D-OWIS and 3D-OWIS-PC-CT.** As discussed above, 3D-OWIS-PC-CT represents our refined adaptation of the initial closed-set baseline Mask3D. The goal here was to enhance its capabilities for an open-world environment. On the other hand, 3D-OWIS is our final model, which emerges from two key changes: - Firstly, we replaced the top-k-based selection of unknown pseudo labels with Confidence Threshold (CT)-based selection. This change, which prioritizes quality over quantity, contributes to generating improved masks and enhancing performance on known classes. - Secondly, we integrated Probability Correction (PC), a technique aimed at rectifying instances that were initially misclassified as known classes. Our inspiration for PC came from analyzing t-SNE plots of the features of unknown classes, a comparison that revealed greater dispersion in our unknown class features compared to those of ORE [1] in 2D open-world object detection (see **rebuttal pdf**). This observation shows the challenge of segmenting unknowns in a 3D setting as opposed to 2D. **Need for Tasks 1, 2, and 3.** A task refers to the set of classes the model has to learn. It is used to keep track of the progress of a model while learning new classes. For split A, which is based on class frequency, it follows the head/common/tail splitting in ScanNet200. However, splits B and C are distinct from this classification in the original dataset: split B is determined based on regions and scene types, and split C is completely random to simulate the randomness aspect of the open-world. [1] Joseph, K. J., et al. 
Towards open world object detection, CVPR 2021. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal! Comment: I thank the authors for the extensive rebuttal which provided insights to my remaining questions. An updated version can be improved by including those clarifications as well as the additional experiments.
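The matching-then-EMA prototype update described in the rebuttal above can be sketched in a few lines. This is a minimal, framework-free illustration; the function name and the momentum value are assumptions, and a real implementation would update feature tensors after Hungarian matching rather than plain lists:

```python
def ema_update(prototype, query_feature, momentum=0.9):
    """Exponential-moving-average update of one class prototype using a
    query feature that matching assigned to that class's (pseudo-)
    ground truth. Both arguments are equal-length feature vectors;
    the momentum value is illustrative, not the paper's setting."""
    return [momentum * p + (1.0 - momentum) * q
            for p, q in zip(prototype, query_feature)]
```

One prototype per known class, plus one shared prototype for the unknowns, would be maintained this way, each refreshed only by the query features matched to its own (pseudo-)ground-truth instances.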
Summary: This paper proposes a pipeline for 3D open-world instance segmentation. The authors provide a problem definition and introduce three different scenarios. Moreover, to overcome the possible problems that may lead to lower performance on known classes, the authors propose different modules like probability correction and exemplar replay. The comprehensive experiments show their proposed methods' effectiveness. Strengths: As claimed by the authors, this paper is the first to investigate the 3D open-world instance segmentation task, and develops some well-motivated modules to improve the performance upon the simple baseline. Weaknesses: I'm not very familiar with the open-world related task and common practices both in 2D and 3D. Currently I think the writing of this paper is confusing and seems to have been finished in a hurry. For example, the "3D-OWIS-PC-CT", I assume the "-PC-CT" is **without PC and CT**? And I couldn't find explanations about the "PC" and "CT". In addition, I think the authors can also provide the results of their pipeline equipped with other 3D instance segmentation methods as a comparison. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: As mentioned before, I'm not an expert in this field, so I'll ask more questions in the later reviewing process. Currently I have one concern: since the network won't be re-trained at test time and the network structure is fixed, does it mean the total class number (including the known classes and the unknown classes) will have an upper bound? And will the performance of this feature-cluster-based new class discovery become lower as the new class number increases? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: No. 
The authors didn't address their limitations, and also the possible negative societal impact. I would suggest the authors discuss some fundamental drawbacks of current open-world discovery settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific queries below. **Explanation about PC and CT.** 3D-OWIS-PC-CT represents our method without Probability Correction (PC) and without Confidence Threshold (CT). The final model 3D-OWIS includes all components, including PC and CT. Our 3D-OWIS replaces the top-k-based selection of unknown pseudo labels with Confidence Threshold (CT)-based selection. This change, which prioritizes quality over quantity, contributes to generating improved masks and enhancing performance on known classes. Further, we integrated Probability Correction (PC), a technique aimed at rectifying instances that were initially misclassified as known classes. Our inspiration for PC came from analyzing t-SNE plots of the features of unknown classes, a comparison that revealed greater dispersion in our unknown class features compared to those of ORE [1] in 2D open-world object detection (see rebuttal pdf). This observation shows the challenge of segmenting unknowns in a 3D setting as opposed to 2D. **Comparison to other 3D instance segmentation methods.** We base our work on the recently introduced Mask3D (ICRA 2023), as it achieves state-of-the-art performance on the challenging ScanNet200 benchmark for the task of 3D indoor instance segmentation. Our approach on three carefully curated open-world splits achieves promising results compared to the recent Mask3D. As potential future work, we aim to extend our open-world setting to outdoor 3D scenes and approaches. **Bound on the number of classes.** In our experiments, we set the maximum number of classes to 200, which includes all the classes provided by the ScanNet200 benchmark. Nonetheless, increasing the limit on the number of classes would only add more classifiers at the last layer of Mask3D. 
These classifiers are only trained once the network is presented with ground-truth objects of corresponding classes, and do not affect the overall performance of previous tasks. Therefore, increasing the limit of the number of classes would yield similar results to the ones provided in the paper, but with additional memory requirements. | \# of classes | 200 | 1000 | 5000 | 10000 | 50000 | 100000 | | --------------- | ----- | ----- | ----- | ----- | ----- | ------ | | Size of 3D-OWIS | 39.7M | 39.8M | 40.7M | 41.9M | 50.9M | 62.2M | **Discussion of limitations.** We will add the limitations and societal impact in the revised version. Similar to our base architecture Mask3D, we assume that complete reconstructed scenes are used as input, so the learning benefits from contextual information. [1] Joseph, K. J., et al. Towards open world object detection, CVPR 2021. --- Rebuttal Comment 1.1: Comment: After reading the authors' response, I think most of my concerns are solved. As I'm not an expert in this field, I have also read other reviews and think authors provide useful feedbacks towards them. I will keep my ratings as borderline accept or weak accept, and would love to see other reviewers' post-rebuttal comments to make final decisions.
Summary: This paper introduces an open-world 3D indoor instance segmentation method, where an auto-labeling scheme is employed to produce pseudo-labels during training and induce separation between known and unknown category labels. Pseudo-label quality is further improved subsequently. Strengths: 1. Open-world 3D instance segmentation is an important problem, and the authors are the first to address it. 2. The method design is reasonable and complete. 3. The experiments are also sufficient. Weaknesses: 1. Some designs are hard to understand. For example, for probability correction, why does an unknown object have to be far from the nearest known class, since we have no prior about category distributions? In the experiments, you can also see that PC does not bring improvement to all metrics. This should be illustrated more clearly. 2. Although such an open-world task has not been studied in 3D, it has been widely studied in 2D. Since the core issue of the problem is actually the same, the authors should implement some 2D open-world methods in 3D instance segmentation and compare with them. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the suggestions to improve the clarity of the paper. **On probability correction assuming unknowns are far from knowns.** In our open-world 3D instance segmentation framework, we use the auto-labeler to generate pseudo-labels for the unknowns. The contrastive clustering step then minimizes the distance between each query and its corresponding class prototype and maximizes its distance to the other class prototypes. This includes queries corresponding to the pseudo-labels, which are pulled toward the unknown class prototype. Since the class prototypes are pushed away from each other, it is expected that the unknowns are also pushed away from the known class prototypes. We thank the reviewer and will add this clarification in the revised version. **Comparison to 2D baselines.** We empirically observe that a direct integration of different strategies from 2D provides sub-optimal results with respect to the trade-off between unknown recall and average AP for the proposed open-world 3D indoor instance segmentation setting. To this end, we conduct experiments leveraging contrastive clustering (ORE [4]) and attention-driven pseudo-labeling (OW-DETR [3]) from recent 2D detection works. These experiments and their results are presented below: - OW-DETR [3] introduced an objectness head to predict the objectness of instances in an input image, which significantly improves the performance of the 2D model for both known and unknown classes. However, our experiments revealed that learning the objectness of 3D features enhances the prediction recall for unknown classes (similar to 2D models) but adversely affects the performance of the 3D model on known classes (Ours with learnable objectness as in OW-DETR: 33.35 mAP vs. **Ours: 39.70** mAP).
- Given the enriched semantic and 3D spatial information in the refined queries of the transformer decoder in Mask3D, we pursued a contrastive clustering approach to enhance the distinctiveness between queries of different classes. This facilitates a more accurate estimation of the objectness for predicted instances. To cater to the 3D nature of the problem, we tailored the clustering technique used in ORE [4] for OWOD. Unlike OWOD which clusters intermediate features in the detector, we choose to cluster the queries that encapsulate the richer spatial and semantic information extracted from the 3D transformer decoder rather than features from the output of the 3D backbone. Moreover, to gain more insight into the behavior of unknown classes in the 3D setting, we visualize the t-SNE plots of class features used for clustering in ORE [4] for 2D images and in our 3D-OWIS for 3D point clouds. The t-SNE comparison is presented in the **rebuttal pdf**. The t-SNE features in ORE [4] are extracted from the detector where Pascal VOC classes are known and all classes from MS-COCO are grouped together as unknown. In the case of our 3D-OWIS for 3D point clouds, we use the final refined queries in Mask3D decoder, which are used for predicting the masks and the class labels. In our 3D setting, we are showing split A task 1. The results in the figure show more sparsity in the 3D features of the unknown classes which makes them harder to segment compared to their 2D counterparts which are easier to cluster. To cope with this challenge in 3D point cloud unknown segmentation, we propose to use the known class query prototypes to correct the features of the unknowns in the boundaries of the clusters of the known classes. Therefore, we propose to tailor our method to these observations in the 3D domain and aim to provide an optimal trade-off between unknown recall and known AP. The proposed scheme provides superior performance compared to 2D strategies as shown in the table below. 
Further, we also present a comparison with two other existing works, OLN [1] and GGN [2], in the table below. For the OLN [1] implementation, we remove the classification head. In the case of GGN [2], we train an affinity predictor with a Minkowski 3D backbone. We hypothesize that OLN failed in 3D point cloud instance segmentation because it was not able to learn a good objectness representation from masks alone, given the sparsity of point clouds. Meanwhile, GGN [2] pseudo-labels are of lower quality since the affinity is not optimal for 3D because of the empty space and the disconnected object parts (qualitative results in the **rebuttal pdf** show GGN pseudo-labels).

| | | WI ↓ | A_OSE ↓ | U-Recall ↑ | mAP(known) ↑ | mAP ↑ |
|-------------------|-----------|:---------:|:-----------:|:--------------:|:----------------------:|:----------:|
| Closed-set | Oracle | 0.129 | 227 | 55.94 | 38.75 | 38.6 |
| | Mask3D | \- | \- | \- | 39.12 | 39.12 |
| Open World | OW-DETR [3] | 0.547 | 721 | 22.14 | 35.56 | 35.05 |
| | GGN [2] | 15.68 | 1452 | 21.33 | 20.51 | 20.12 |
| | OLN [1] | \- | \- | 2.45 | \- | \- |
| | **3D-OWIS (Ours)** | 0.397 | 607 | 34.75 | 40.2 | 39.7 |

**Discussion of limitations.** We will add the limitations and societal impact in the revised version. Similar to our base architecture Mask3D, we assume that complete reconstructed scenes are used as input, so the learning benefits from contextual information.

[1] Kim et al., Learning Open-World Object Proposals without Learning to Classify, ICRA 2022
[2] Wang et al., Open-World Instance Segmentation: Exploiting Pseudo Ground Truth From Learned Pairwise Affinity, CVPR 2022
[3] Gupta, Akshita, et al. Ow-detr: Open-world detection transformer, CVPR 2022.
[4] Joseph, K. J., et al. Towards open world object detection, CVPR 2021.

---

Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thank you for your rebuttal. Most of my concerns are addressed, and I will keep my score.
Summary: The paper presents a new application of open-world object detection to the setting of 3D instance segmentation. In this setup, a 3D point-cloud segmentation model is required to label each point with instance information, whether or not this instance is part of training. The paper uses ScanNet and proposes a few ways to partition the dataset for open-world evaluation. Finally, the paper adopts the framework in [16] as a baseline for this problem and proposes two tricks on top of this framework: PC (probability correction) and CT (confidence thresholding). Strengths: The task of open-world segmentation in 3D is important yet overlooked by previous work. As outlined by the paper, prior works mainly focus on 2D setups, either image or video. The use of ScanNet for evaluation seems appropriate and the splitting mechanism also seems reasonable; especially the Region-Based split has a very nice motivation from real-world application. Weaknesses: Post-rebuttal comments: 1. The authors provided more insights into the difference between 2D and 3D. I was hoping for more attributes unique to 3D, but having some insights at least improves the quality of the work. 2. My second concern is addressed. Original review: Although there are many things to like about the proposal of the task, this work may suffer from a few important weaknesses: - Missing insights specific to 3D. The major contribution of the paper is to adapt the recent task of open-world localization to the 3D setting. Unfortunately, the paper does not focus enough on aspects unique to 3D. Most components are the same as in the 2D task, and the insights provided by the paper do not contain enough 3D-specific information. To justify the contribution, it is important for the paper to provide both intuitions and empirical proof of why it is worth studying 3D and how it differs from a simple adoption of a 2D framework.
In fact, 3D-OWIS seems to be almost identical to [16] besides swapping the backbone predictor with a 3D backbone. The claimed contributions PC and CT are also not specific to 3D, nor do they reveal anything special about 3D. - Lack of comprehensive evaluation/ablations. If the authors prefer to base more of their contribution on the dataset part, it is important to benchmark the task comprehensively. A suite of recent 2D baselines should be evaluated. For example, the recent works OLN [A] and GGN [B] both provide good class-agnostic proposals, and their 3D variants should be evaluated. In particular, GGN also uses a self-teaching scheme. Minor: - The presentation of the baseline 3D-OWIS-PC-CT seems really vague when first introduced. Better to elaborate on what this is when it is first discussed. - If the authors prefer to claim contributions mainly to the task/benchmark, NeurIPS D&B is more appropriate. [A] Kim et al., Learning Open-World Object Proposals without Learning to Classify, ICRA 2022 [B] Wang et al., Open-World Instance Segmentation: Exploiting Pseudo Ground Truth From Learned Pairwise Affinity, CVPR 2022 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are not sufficiently discussed. For example, when repurposing ScanNet200, what are some potential limitations? This dataset is not designed specifically for the proposed open-world task. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's valuable feedback. Detailed answers to the reviewer's queries are provided below. **3D-specific insights.** As recommended, we present here both empirical and qualitative results to provide more insights specific to our 3D setting. We empirically observe that a direct integration of different strategies from 2D provides sub-optimal results with respect to the trade-off between unknown recall and average AP for the proposed open-world 3D indoor instance segmentation setting. To this end, we conduct experiments by leveraging contrastive cluster (ORE [16]) or attention-driven pseudo-labeling (OW-DETR [C]) from recent 2D detection works. These experiments and their results are presented below: OW-DETR [C] has introduced an objectness head to predict the instances' objectness in an input image, which significantly improves the performance of the 2D model for both known and unknown classes. However, our experiments revealed that learning the objectness of 3D features enhances the prediction recall for unknown classes (similar to 2D models) but adversely affects the performance of the 3D model on known classes (Ours with learnable objectness as in OW-DETR: 33.35 mAP vs: **Ours: 39.70** mAP). Given the enriched semantic and 3D spatial information in the refined queries of the transformer decoder in Mask3D, we pursued a contrastive clustering approach to enhance the distinctiveness between queries of different classes. This facilitates a more accurate estimation of the objectness for predicted instances. To cater to the 3D nature of the problem, we tailored the clustering technique used in ORE [16] for OWOD. Unlike OWOD in [16] which clusters intermediate features in the detector, we propose to cluster the queries that encapsulate the richer spatial and semantic information extracted from the 3D transformer decoder rather than features from the output of the 3D backbone. 
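The contrastive clustering objective described in this rebuttal (pull each query toward its own class prototype, push it away from the other prototypes) can be sketched as follows. This is a minimal illustration in the spirit of ORE's hinge-style loss; the margin value and helper names are chosen for the example and are not taken from the paper:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_cluster_loss(query, label, prototypes, margin=5.0):
    """Pull `query` toward its own class prototype; push it at least
    `margin` away from every other class prototype (hinge penalty)."""
    loss = 0.0
    for cls, proto in prototypes.items():
        d = euclidean(query, proto)
        if cls == label:
            loss += d                      # attraction to own prototype
        else:
            loss += max(0.0, margin - d)   # repulsion from other prototypes
    return loss
```

A query sitting on its own prototype and far from all others incurs zero loss, while a query drifting into another class's cluster is penalized — which is the mechanism that pushes pseudo-labeled unknowns away from the known class prototypes.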
Moreover, to gain more insight into the behavior of unknown classes in the 3D setting, we visualize the t-SNE plots of class features used for clustering in ORE [16] for 2D images and in our 3D-OWIS for 3D point clouds. The t-SNE comparison is presented in the **rebuttal pdf**. The t-SNE features in ORE [16] are extracted from the detector where Pascal VOC classes are known and all classes from MS-COCO are grouped together as unknown. In the case of our 3D-OWIS for 3D point clouds, we use the final refined queries in Mask3D decoder. In our 3D setting, we are showing split A task 1. The t-SNE results show more sparsity in the 3D features of the unknown classes which makes them harder to segment, compared to their 2D counterparts which are easier to cluster. To cope with this challenge in 3D point cloud unknown segmentation, we propose to use the known class query prototypes to correct the features of the unknowns in the boundaries of the clusters of the known classes. The proposed scheme provides superior performance compared to both 2D strategies as shown in the table below. **Comparison to OLN [A] and GGN [B].** As recommended, we present the comparison with OLN [A] and GGN [B] in the table below. For OLN [A], we remove the classification head from Mask3D, while we keep the mask loss during training. In the case of GGN [B], we train an affinity predictor with Minkowski as a 3D backbone and a prediction head for the affinity. Affinity maps are generated by the affinity predictor, and fed to the Connected Components module to generate class-agnostic mask proposals. We hypothesize that OLN failed in 3D point cloud instance segmentation because it wasn't able to learn a good objectness representation from masks only given the sparsity of point clouds. 
Meanwhile, GGN [B] pseudo-labels are of lower quality since the affinity is not optimal for 3D because of the empty space and the disconnected object parts (qualitative results in the **rebuttal pdf** show GGN pseudo-labels).

| | | WI ↓ | A_OSE ↓ | U-Recall ↑ | mAP(known) ↑ | mAP ↑ |
|-------------------|-----------|:---------:|:-----------:|:--------------:|:----------------------:|:----------:|
| Closed-set | Oracle | 0.129 | 227 | 55.94 | 38.75 | 38.6 |
| | Mask3D | \- | \- | \- | 39.12 | 39.12 |
| Open World | OW-DETR [C] | 0.547 | 721 | 22.14 | 35.56 | 35.05 |
| | GGN [B] | 15.68 | 1452 | 21.33 | 20.51 | 20.12 |
| | OLN [A] | \- | \- | 2.45 | \- | \- |
| | **3D-OWIS (Ours)** | 0.397 | 607 | 34.75 | 40.2 | 39.7 |

**Clarification on 3D-OWIS-PC-CT.** We thank the reviewer for the suggestion. 3D-OWIS-PC-CT represents our method without Probability Correction (PC) and without Confidence Thresholding (CT). The final model 3D-OWIS includes all components, including PC and CT. **Discussion of limitations.** We note that the ScanNet200 benchmark is the largest 3D indoor instance segmentation dataset in terms of the number of classes. Due to its highly challenging nature, diversity, and number of classes, we adapt it to the open-world setting. We also introduce carefully curated open-world splits leveraging realistic scenarios based on the inherent object distribution, region-based indoor scene exploration, and the randomness aspect of open-world classes. As noted by the reviewer, our region-based split is well motivated from the perspective of real-world application. In the future, we aim to further explore the open-world setting for outdoor 3D scenes.

[C] Gupta, Akshita, et al. Ow-detr: Open-world detection transformer, CVPR 2022.

---

Rebuttal Comment 1.1: Title: Thanks for your response Comment: After reading the response from the authors as well as the reviews from the other reviewers, I think my initial concerns were shared among the other reviewers.
Given the response, I think my second concern regarding lack of studies adopting 2D detectors is resolved. It seems non-trivial to extend them to 3D setting, and an naive extension with GGN fails due to the reasons explained by the authors. The author gave some analysis regarding 3D features being more scattered and difficult, and I think this insight is useful. However, I think what (other reviewers and) I are looking for, is something unique to 3D, such as geometry. For example, the failed adoption of OLN due to sparsity of point-cloud is quite interesting. Some other angles relating to 3D geometry might provide extra insights to strengthen the paper. For example, is 3D shape harder to generalize in open-world than 2D? Or maybe shapes are less prone to overfitting in closed-world scenario. That being said, I think the author did a good job responding to the concern regarding adopting other 2D detectors. I am inclined to raise my rating to borderline accept or weak accept. Would love to hear from other reviewers' post-rebuttal comments.
Rebuttal 1: Rebuttal: We thank all the reviewers (PXM7, CNBM, WNwG, L857, HHFW) for the positive and valuable feedback, and we appreciate the comments to improve our work. **Reviewer PXM7:** "Idea is easy to follow and the presentation is overall clear. Illustration figures are clear. The corresponding modifications tailored to this new problem are reasonable". **Reviewer CNBM:** "open-world segmentation in 3D is important yet overlooked. Region-Based split has a very nice motivation from real-world application". **Reviewer WNwG:** "The method designing is reasonable and complete. The experiment is also sufficient.". **Reviewer L857:** "develop some well-motivated modules to improve the performance". **Reviewer HHFW:** "The proposed approach has significant practical significance". We summarize the main points presented in our response: - We include comparisons to other strategies tackling 2D open-world. - We present more insights on the 3D challenges and motivations to our proposed method. - We provide clarifications and answers to the points raised, which we will include in our revised version. Pdf: /pdf/83d90dc242e595aed9fe0404b3c674e9b16efc2e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper addresses the challenge of 3D instance segmentation in open-world scenarios. It starts with a formulation of this problem, including the definition and setup of known and unknown objects and different category splits for simulating different open-world cases. Accordingly, this work proposes a framework incorporating an unknown object identifier to detect objects not present in the training set and devises several strategies to enhance the separation of classes within the query embedding space, such as contrastive clustering and reachability-based probability correction. Experiments on the new benchmark validate the effectiveness of the proposed framework. Strengths: - This paper studies a new problem setting, open-world 3D instance segmentation, with a new benchmark and framework. - The basic idea is easy to follow and the presentation is overall clear. - The illustration figures are also clear, such as Fig. 2-4. - The new benchmark considers different open-world cases when splitting categories. - The corresponding modifications tailored to this new problem are reasonable, from contrastive clustering for queries to reachability-based probability correction and alleviating catastrophic forgetting for incremental learning. - Experiments show the effectiveness of the proposed framework. The different evaluation metrics also offer different perspectives for comparing methods. Weaknesses: - There is no preliminary section for the baseline framework, Mask3D, making the introduction of the corresponding modifications tailored to this new setting a little confusing. In my opinion, it would be much better to introduce the baseline framework after setting up the benchmark (i.e., after Sec. 3.2), and even to open a new section for the "approach" part. - Some of the modifications are adapted from experience in the 2D domain, and they are not revised to fit the 3D problem or tackle some specific challenges in 3D.
Although some of them such as reachability-based probability correction have their own insight, it is still unclear whether the underlying reason for these challenges is related to the different data modality. It would be better to have deeper thinking from this aspect. - The evaluation benchmark involves comprehensive metrics for the new setup, but it lacks an intuitive connection or comparison between the new problem and other conventional ones, such as 2D open-world instance segmentation/detection and 3D close-set instance segmentation. Given this is a relatively new problem, it would be better to compare it with other well-explored problems to make it more friendly and acceptable for researchers from this community. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: None. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The author does not discuss the limitations and potential social impacts in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific queries below. **Preliminary section after Sec 3.2.** We thank the reviewer for this suggestion. As suggested by the reviewer, we will add a new section to describe our closed-set baseline “Mask3D” in the revised version. **More insights on the underlying 3D challenges.** The open-world learning problem requires detecting unknowns (unknown recall) along with accurately detecting class-specific knowns (average AP). While some of the proposed adaptations are inspired from the 2D domain, we note that a direct integration of these strategies from 2D provides sub-optimal results with respect to the trade-off between unknown recall and average AP for open-world 3D indoor instance segmentation. To gain further insights, we conduct experiments by leveraging contrastive cluster (ORE) [4] or attention-driven pseudo-labeling (OW-DETR) [3] from recent 2D detection works. These experiments and their results are presented below: - OW-DETR [3] has introduced an objectness head to predict the instances' objectness in an input image, which significantly improves the performance of the 2D model for both known and unknown classes. However, our experiments revealed that learning the objectness of 3D features enhances the prediction recall for unknown classes (similar to 2D models) but adversely affects the performance of the 3D model on known classes (Ours with learnable objectness as in OW-DETR: 33.35 mAP vs: **Ours: 39.70** mAP). - Given the enriched semantic and 3D spatial information in the refined queries of the transformer decoder in Mask3D, we pursued a contrastive clustering approach to enhance the distinctiveness between queries of different classes. This facilitates a more accurate estimation of the objectness for predicted instances. To cater to the 3D nature of the problem, we tailored the clustering technique used in ORE [4] for OWOD. 
Unlike OWOD in [4] which clusters intermediate features in the detector, we propose to cluster the queries that encapsulate the richer spatial and semantic information extracted from the 3D transformer decoder rather than features from the output of the 3D backbone. Moreover, we observed from the t-SNE plots of the known and unknown features (**provided in the rebuttal pdf**) that the features of the unknowns are more dispersed in 3D compared to 2D, and are thus harder to segment. Therefore, we propose to tailor our method to these observations in the 3D domain and aim to provide an optimal trade-off between unknown recall and known AP. The proposed scheme provides superior performance compared to 2D strategies as shown in the table below. **Comparison to 2D open-world instance segmentation/detection and 3D closed-set instance segmentation**. As recommended by the reviewer, we provide the following results after implementing techniques proposed for 2D open-world instance segmentation/detection. We also present a comparison between the results of Mask3D and Oracle (3D-OWIS with access to the training dataset of the previously known classes and the labels of the unknown classes). We show results for Split A task1, as we note that GGN [2] and OLN [1] focus on segmenting the unknowns and do not target the incrementally learned tasks. On the other hand, our method aims at both segmenting the unknowns in the current task, as well as incrementally learning classes that are introduced at each task. Our approach performs favorably compared to [1, 2, 3]. 
| | | WI ↓ | A_OSE ↓ | U-Recall ↑ | mAP(known) ↑ | mAP ↑ |
|-------------------|-----------|:---------:|:-----------:|:--------------:|:----------------------:|:----------:|
| Closed-set | Oracle | 0.129 | 227 | 55.94 | 38.75 | 38.6 |
| | Mask3D | \- | \- | \- | 39.12 | 39.12 |
| Open World | OW-DETR [3] | 0.547 | 721 | 22.14 | 35.56 | 35.05 |
| | GGN [2] | 15.68 | 1452 | 21.33 | 20.51 | 20.12 |
| | OLN [1] | \- | \- | 2.45 | \- | \- |
| | **3D-OWIS (Ours)** | 0.397 | 607 | 34.75 | 40.2 | 39.7 |

**On limitations**. We will add the limitations and societal impact in the revised version. Similar to our base architecture Mask3D, we assume that complete reconstructed scenes are used as input, so the learning benefits from contextual information.

[1] Kim et al., Learning Open-World Object Proposals without Learning to Classify, ICRA 2022
[2] Wang et al., Open-World Instance Segmentation: Exploiting Pseudo Ground Truth From Learned Pairwise Affinity, CVPR 2022
[3] Gupta, Akshita, et al. Ow-detr: Open-world detection transformer, CVPR 2022.
[4] Joseph, K. J., et al. Towards open world object detection, CVPR 2021.

---

Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I acknowledge that I have read the authors' rebuttal and the other reviews. Thank you for addressing my concerns. I think most of my concerns are addressed to some extent and will keep my score in the final decision. I strongly recommend the authors carefully consider the reviewers' comments and revise the paper as promised in the rebuttal.
null
null
null
null
null
null
DAC-DETR: Divide the Attention Layers and Conquer
Accept (poster)
Summary: This paper introduces DAC-DETR, a method to improve the training efficacy of DEtection Transformers (DETR) by addressing the contrary impacts of cross-attention and self-attention layers in the DETR decoder. DAC-DETR divides the cross-attention layers into an auxiliary decoder, which focuses on learning the cross-attention, while the original decoder handles non-duplicate detection. By employing one-to-many label assignment in the auxiliary decoder, DAC-DETR effectively improves the gathering effect of queries, leading to improved detection accuracy compared to popular DETR baselines. Strengths: - The paper is well-organized, presenting its ideas and findings in a clear and coherent manner. - The motivation is clear. The proposed methods are straightforward and logical, making them easy to understand and implement. - The effectiveness of the proposed methods is well-supported by a comprehensive set of experiments. The experimental results demonstrate significant improvements over existing DETR models. Weaknesses: I have no severe concerns about the paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The reviewer is curious about the performance of the one-to-many branch (C-decoder). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **VXuS-Q1: The reviewer is curious about the performance of the one-to-many branch (C-decoder).** **[Ans]:** Thanks for this question. The C-Decoder (with NMS) achieves slightly lower accuracy than the O-Decoder. Since the C-Decoder has no self-attention layers and produces duplicate detections, NMS is a prerequisite for filtering out the duplicate predictions. Based on the Deformable-DETR baseline (43.7 AP), using the C-Decoder (+NMS) achieves 46.9 AP, which is 0.2 AP lower than using the O-Decoder (47.1 AP). Moreover, we notice a related question (raised in **LpXx-Q3**) that might draw your interest as well: combining the predictions from the C-Decoder and O-Decoder may bring a further incremental improvement. For example, in the above setting, combining the C-Decoder (46.9 AP) and O-Decoder (47.1 AP) achieves 47.4 AP. Please kindly refer to **LpXx-Q3** for more details.
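For context, the filtering step referred to in this answer is standard greedy non-maximum suppression over the C-Decoder's duplicate predictions. A minimal, generic NMS sketch (illustrative only, not the authors' implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping duplicates."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

Because the O-Decoder keeps its self-attention layers and one-to-one matching, it suppresses duplicates internally and needs no such post-processing.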
Summary: This paper observes the problems in cross-attention and self-attention that impacts the queries, and proposes to use divide-and-conquer to improve the training accuracy. Strengths: 1. The paper is considered novel to me. The insights in analyzing the cross-attention and self-attention and the proposed design are interesting. 2. Strong performance. The final performance is improved compared with SOTA detectors. Weaknesses: 1. The design seems a little complex. Not sure if this design can be plugged into other DETR-like model easily. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **QUSJ-Q1: The design seems a little complex. Not sure if this design can be plugged into other DETR-like model easily.** **[Ans]:** Thanks for this question. Plugging our method into DETR-like models is easy. Given a DETR-like model, we only need to append a C-Decoder via two steps: **1)** replicating its original decoder and **2)** removing the self-attention layers. The C-Decoder is supervised with the one-to-many matching, which may seem a bit complex but is reliable (DAC-DETR shows considerable robustness to the one-to-many hyper-parameters, as shown in Fig. 4 in the manuscript). In our experiments, we have already plugged DAC-DETR into multiple popular baselines, *i.e.*, Deformable-DETR, Deformable-DETR++, and DINO, and achieved consistent improvements. We also show that our method is compatible with several of the most recent good practices, *e.g.*, aligned loss and stable matching (during the rebuttal). In all these experiments, we use the same hyper-parameters for DAC-DETR. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal. Comment: Thanks for the rebuttal. I also notice that the final best performance is achieved by '+align', which is adopted from another paper, Align DETR. Could the proposed method improve the final performance, or does it just accelerate convergence? From the table, it is still unclear. --- Reply to Comment 1.1.1: Title: Response to Reviewer QUSJ Comment: Thanks. After your kind reminder, we find that our comparison indeed needs some reorganization and highlighting to be clearer. DAC-DETR's benefits include both faster convergence and higher final accuracy. A supporting observation is in Table 1 in the manuscript. Specifically, based on the DINO (ResNet-50) baseline, DAC-DETR (**NO** align loss) brings +1.0 (49.0 $\rightarrow$ 50.0), +0.8 (50.4 $\rightarrow$ 51.2) and +0.6 (50.9 $\rightarrow$ 51.5) AP under 12 epochs, 24 epochs, and 36 epochs, respectively.
The results under 12 and 24 epochs are already provided in Table 1 in the manuscript, and we will add the 36-epoch results into Table 1. It shows that: * The improvement under the short training schedule is larger (+1.0 AP), indicating faster convergence. * The non-trivial improvement (+0.6 AP) still holds when the training schedule is long (*i.e.*, 36 epochs). Moreover, if we consider Align DETR as another baseline, the non-trivial improvements still hold. For example, using the Swin-L backbone, DAC-DETR improves Align DETR by +0.7 (57.4 $\rightarrow$ 58.1) and +1.0 (58.2 $\rightarrow$ 59.2) under 12 and 24 epochs, respectively. The baseline results of Align DETR are not listed in the main text and are provided in the Appendix (Table A3). We will add the comparison to Table 2 in the main text.
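The two-step construction described in QUSJ-Q1 above (replicate the original decoder, then remove its self-attention layers) can be sketched schematically. This is an illustrative stand-in, not the authors' released implementation: decoder blocks are represented as lists of layer tags so the transformation is easy to see.

```python
import copy

# Schematic stand-in for a DETR decoder: each block is a list of layer tags.
# Real decoders interleave self-attention, cross-attention, and FFN modules.
def make_o_decoder(num_layers=6):
    return [["self_attn", "cross_attn", "ffn"] for _ in range(num_layers)]

def make_c_decoder(o_decoder):
    # Step 1: replicate the original decoder.
    # Step 2: remove the self-attention layers.
    # (In the actual model the C-Decoder shares weights with the O-Decoder,
    # so "replicate" means reusing the cross-attention and FFN modules.)
    replica = copy.deepcopy(o_decoder)
    return [[layer for layer in block if layer != "self_attn"]
            for block in replica]
```

Because the C-Decoder reuses the O-Decoder's weights and is dropped at inference, this addition changes neither model size nor inference latency.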
Summary: The authors find that the cross-attention and self-attention in the DETR decoder have opposite effects on object queries. This phenomenon reduces the training efficiency of the DETR model. To resolve the contradiction, this paper proposes a Divide-And-Conquer DETR that employs an auxiliary decoder that shares parameters with the original decoder. Experimental results show that DAC-DETR achieves significant improvements over popular DETRs. Strengths: The paper adequately cites relevant work and clearly articulates the differences from existing work. The experiments show promising results. And other researchers may build a network structure based on this paper. Weaknesses: * **Originality**: This article is a combination of a series of existing methods and is not innovative enough. * **Quality**: The motivation is straightforward, but the authors should give more analysis about the contradiction. DAC-DETR brings improvement over the baseline, but this does not indicate that the problem has been solved by theoretical analysis or experimental results. * **Clarity**: This paper is not well organized and has some grammar errors. * **Significance**: The model architecture is not adequately simple, which will affect its usability. From the experimental results, the authors proposed DAC-DETR to solve the conflict problem of cross-attention and self-attention but did not fundamentally innovate the structure to solve the problem. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In Figure 1 (b,c), which module is used to obtain the classification score and bounding box of the object? Why are the "cls" scores of objects in Figure 1(b) so different from the "cls" scores in Figure 1(c)? Does the same phenomenon occur at the last layer of the DETR decoder, and does it occur in DINO? 2. This paper proposes DAC-DETR to resolve the contradiction between cross-attention and self-attention. 
To show whether the problem in Figure 1 is solved by the proposed method, the authors should give more visualization analysis like Figure 1, with and without DAC. 3. In Table 3, why is the AP of variant-2 and variant-3 reduced? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: 1. The description of the one-to-many label assignment in Section 3.2 is a bit too abstract. This is a key component of the proposed method, judging from Table 4, and the major difference from existing methods. 2. The authors proposed DAC-DETR to solve the conflict problem of cross-attention and self-attention, but innovative structures are not proposed to fundamentally solve the problem. 3. Compared with the one-to-many Hungarian Loss in Table 4, the increase of AP is due to the matching score introducing an IoU score, which is also compatible with the original DETR. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **CkgD-Q1: This article is a combination of a series of existing methods and is not innovative enough.** **[Ans]:** We respectfully disagree with this point. We would like to highlight our major contributions as below: 1) We reveal a characteristic of DETR, *i.e.*, the cross-attention and self-attention layers in DETR decoder have opposing effects of "gather $\leftrightarrow$ disperse". This characteristic has not been noticed before and provides a new insight for understanding the training difficulty of DETR. 2) Based on this insight, we propose to separate the cross-attention out from the contradiction. This objective is implemented with a simple design, *i.e.*, adding a C-Decoder that replicates the original decoder but removes all the self-attention layers. 3) We show that DAC-DETR brings consistent improvements to popular DETR-like methods. For example, under 12 epochs learning scheme, DAC-DETR improves Deformable-DETR, Deformable-DETR++ and DINO by +3.4, +2.3 and +1.0 AP, respectively. * * * **CkgD-Q2: The motivation is straightforward, but there should be analysis (in addition to the accuracy improvement) to show the problem has been solved.** **[Ans]:** Thanks. Section 3.3 (mechanism analysis) and Fig. 3 in the manuscript have already shown that DAC-DETR suppresses the contradiction: **1)** the number of queries gathered to each object is enlarged, and **2)** the best query for each object becomes even closer to the corresponding object. * * * **CkgD-Q3: The model architecture is not adequately simple, which affects its usability.** **[Ans]:** We respectfully recall that our method only adds small complexity (C-Decoder) to the baseline structure. C-Decoder is simple: it replicates the structure of the original decoder (O-Decoder) and removes the self-attention layers. 
Our architecture complexity is comparable to (and usually lower than) that of recent state-of-the-art methods, *e.g.*, Hybrid-DETR adds a parallel decoder branch that has more queries, and Group-DETR replicates the original decoder multiple times. Correspondingly, our DAC-DETR has higher training efficiency compared with recent state-of-the-art methods (Section A.3 in the appendix). * * * **CkgD-Q4: In Fig.1 (b,c), which module is used to obtain the classification score and bounding box of the object? Why are the "cls" scores of objects in Fig.1(b) different from the "cls" scores in Fig.1(c)? Does the same phenomenon occur at the last layer of the DETR decoder and in DINO?** **[Ans]:** Thanks for the questions. **1)** The classification score and the bounding box prediction are obtained at the last (6-th) decoder layer. **2)** The classification score difference is because Fig.1(b) removes all the self-attention layers from the decoder of the already-trained model in Fig.1(c). Correspondingly, the detector loses the capability to suppress duplicates (L36 in the manuscript) and thus increases the predicted score for multiple queries. **3)** Fig. 1 already uses the last layer. The phenomenon is similar in earlier layers and also occurs in DINO (as visualized in the supplementary PDF for rebuttal). * * * **CkgD-Q5: Analysis on whether the contradiction problem in Figure 1 is solved by the corresponding method.** **[Ans]:** Thanks. Section 3.3 (mechanism analysis) and Fig. 3 in the manuscript have already shown that DAC-DETR suppresses the contradiction. The evidences are **1)** the number of queries gathered to each object is enlarged and **2)** the best query for each object becomes even closer to the corresponding object. We note that Fig. 3 is a statistic across the whole validation set and is more general than single-sample visualization. * * * **CkgD-Q6: In Table 3, why is the AP of variant-2\&3 reduced?** **[Ans]:** Thanks for this question. 
We guess your question is why the AP of variant-2 and variant-3 is even slightly lower than the baseline. This is reasonable. **1)** Variant-2 re-adds the self-attention layers into the C-Decoder, and thus can be viewed as having two O-Decoders. Applying additional one-to-many matching for the O-Decoder makes the model prone to duplicate predictions and thus reduces the accuracy. **2)** Variant-3 uses one-to-one matching for the C-Decoder, while the C-Decoder has no self-attention and is naturally prone to make duplicate (one-to-many) predictions. In other words, in Variant-3, the supervision has some conflict with the characteristic of the C-Decoder (L255 in the manuscript), therefore reducing the accuracy. * * * **CkgD-Q7: The description of the one-to-many matching in section 3.2 is a bit too abstract. This is a key component of the proposed method, from Table 4, and the major difference with the existing method.** **[Ans]:** We respectfully disagree. The one-to-many label assignment is only a technical detail for training our C-Decoder and is indeed simple: we calculate the matching score (Eqn. 4) and assign the positive labels to highly-scored queries. Our core contributions are the discovery of the opposing "gather $\leftrightarrow$ disperse" phenomenon and the corresponding C-Decoder solution. The C-Decoder is not restricted to any particular one-to-many method and has the potential to utilize better one-to-many methods for further improvement. * * * **CkgD-Q8: In Table 4, compared with one-to-many Hungarian Loss, the increase of AP is due to your matching score introducing an IoU score.** **[Ans]:** We apologize for causing confusion. In Table 4, the two versions of DAC-DETR ("w/ one-to-many Hungarian" and ours) actually use the same matching score definition (Eqn. 4). Their only difference is how they assign positive labels: the former uses set-to-set Hungarian matching while the latter (ours) uses the "threshold + ranking" criterion. 
Therefore, our superiority is not due to a better matching score, but because the C-Decoder favors the threshold strategy (L265 in the manuscript). We will clarify this point in the manuscript.
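The "threshold + ranking" positive-label criterion discussed in CkgD-Q7/Q8 can be illustrated with a minimal sketch. The matching-score definition itself (Eqn. 4 in the manuscript) is abstracted away as a given query-by-object score matrix; the threshold and top-$k$ values below are illustrative, not the paper's hyper-parameters.

```python
import numpy as np

def assign_positive_labels(scores, threshold=0.3, top_k=4):
    """For each object (column), mark queries (rows) as positive when their
    matching score exceeds `threshold`, keeping at most `top_k` per object.
    The top-k cap limits label imbalance between easy and hard objects."""
    positive = np.zeros(scores.shape, dtype=bool)
    for j in range(scores.shape[1]):
        above = np.where(scores[:, j] > threshold)[0]
        # rank the qualifying queries, keep the top-k highest-scored ones
        keep = above[np.argsort(scores[above, j])[::-1][:top_k]]
        positive[keep, j] = True
    return positive
```

Unlike set-to-set Hungarian matching, this per-object criterion lets several highly-scored queries all become positives for the same object, which is what the self-attention-free C-Decoder is naturally inclined to predict.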
Summary: This paper proposes a simple modification to the DETR architecture that improves upon several prior implementations. The paper identifies that the single decoder approach causes the model to try and achieve opposing objectives in terms of the query coverage and deduplication, and proposes a solution to the problem: an additional decoder having only cross attention layers is employed which improves performance on a standard benchmark (COCO). Strengths: ## Originality and Significance The findings about the self attention performing deduplication and cross attention gathering information from the image were already known (as mentioned in this paper). The primary contributions here are * Identifying that these two effects actually are in competition with each other, causing the training speed and final performance to be affected. * Proposing a solution by adding an additional decoder having only cross attention. * Displaying that this approach can be combined with several DETR variants and improves performance while reducing training time in all those cases * Ablations showing that freezing weights in the O-decoder, adding self attention layers in C-decoder and removing the one-to-many matching all reduce performance, supporting the findings of the paper. Adding an additional cross-attention only decoder is a simple modification that can be implemented on top of most other DETR variants, making the finding useful and significant. Since the added layers share weights with the original decoder, the size of the model does not increase, and the additional decoder is not used during inference, ensuring that there is no added inference latency due to the proposed modification. 
## Clarity and Quality * Clear figures with descriptive stand-alone captions * The findings are clearly explained with supporting evidence Weaknesses: * No analysis on if there is any difference in performance on large/small objects and common/rare objects * Some qualitative examples of difference between the proposed approach and the baselines would be helpful Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. What happens if you try to use the predictions from the C-decoder during inference as well? For instance by combining the predictions from the two decoders? Does it further improve performance? 2. Line 172 "and the second one is to suppress the label imbalance regarding different objects." Could you explain more clearly how this is achieved? 3. In Figure 3, is this average number of queries computed on the C-decoder or the O-decoder? Suggestions: 1. Add an explanation of what t is in Figure 3 caption and change the y axis label to "avg number of queries / object" 2. The scale of the figures in Figure 3 should be kept constant, otherwise at first glance it looks like the gap increases at higher threshold, which is not the case 3. Writing suggestions: In several places, the phrase "the contrary" is used in a grammatically incorrect manner (divide the cross attention from the contrary / their contrary impairs ... etc) and I'd like to suggest that would be better substituted with the following phrasing: "To improve the training efficacy, we propose a Divide-And-Conquer DETR (DAC-DETR) that divides the cross-attention out from this contrary for better conquering" --> "To improve the training efficacy, we propose a Divide-And-Conquer DETR (DAC-DETR) that separates out the cross attention to avoid these competing objectives." "have some contrary impacts on the object queries." 
-> "Have some opposing effects on the object queries" "These two impacts are both critical for DETR (as explained later), and their contrary impairs the training efficacy." -> "These two impacts are both critical for DETR (as explained later), and their contrasting effects impairs..." "In spite of the contrary, these two effects are both important for DETR." --> "In spite of the conflicting /rival objectives, we find that these two effects ...." And similarly for other occurrences of the word contrary. L169 grount-truth -> ground-truth Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **LpXx-Q1: Analysis on if there is any difference in performance on large/small objects and common/rare objects.** **[Ans]:** Thanks for this suggestion. The performance gain on large/small objects is slightly larger/smaller (as shown in Table 1 in the manuscript), and the performance gain on common/rare objects is relatively smaller/larger (as shown in the table below). The details are as below: 1) Large objects vs small objects. Table 1 in the manuscript already provides detailed results on small ( AP$_S$ ), medium ( AP$_M$ ) and large ( AP$_L$ ) objects. It is observed that AP$_L$ has slightly larger improvement than AP$_S$. For example, on the Deformable-DETR baseline, the improvement on AP$_L$ and AP$_S$ is +4.3 AP and +3.0 AP, respectively. This is because small objects are inherently hard to detect and improving AP$_S$ is more difficult. 2) Common objects vs rare objects. We compare the two most-frequent classes ('person' and 'car') against the two rarest classes ('toaster' and 'hair drier') in COCO in the table below. It is observed that the improvement on the rare classes (+6.1 AP) is larger than on the common classes (+3.8 AP). We infer it is because the benefit of DAC-DETR (*i.e.*, improving the gathering effect) is more significant on rare classes. | Method | Common (person \& car) | Rare (toaster \& hair drier) | | :---: | :---: | :---: | | Baseline | 49.6 | 22.2 | | Variant-4 | 53.4 (+3.8) | 28.3 (+6.1) | * * * **LpXx-Q2: Some qualitative examples of difference between the proposed approach and the baselines would be helpful.** **[Ans]:** Thanks. Section A.6 (appendix) provides some qualitative examples of our DAC-DETR and the baseline. In the visualized examples, DAC-DETR shows better IoU and higher confidence on some hard examples (*e.g.*, an occluded zebra, a cat wearing a hat). We think this is because DAC-DETR can improve both the quantity and quality of the queries for each object. 
* * * **LpXx-Q3: What happens if you try to use the predictions from the C-decoder during inference as well? For instance by combining the predictions from the two decoders? Does it further improve performance?** **[Ans]:** Thanks for this good suggestion. During rebuttal, we combine the predictions from the O-Decoder and C-Decoder and find it brings slight improvement (e.g., +0.3 AP). Specifically, we average the predicted logits of the two decoders, use softmax to transform the averaged logits into classification scores, and then use NMS to suppress the duplicate detections. On the Deformable-DETR baseline, while our DAC-DETR already achieves 47.1 AP (12 training epochs on COCO), combining the C-Decoder and O-Decoder brings another round of +0.3 AP improvement. We note that this improvement might become smaller on higher baselines and it considerably increases the inference cost. * * * **LpXx-Q4: Line 172 "and the second one is to suppress the label imbalance regarding different objects." Could you explain more clearly how this is achieved?** **[Ans]:** Thanks for your kind reminder. It is because an inherently easy-to-recognize object tends to attract many queries, therefore having many high-scored queries (which is also observed by DETA). Only using the threshold will make the easy-to-recognize objects have many more positive queries than the hard-to-recognize objects have, therefore causing label imbalance. Adding the top-$k$ selection suppresses the imbalance problem to some extent. * * * **LpXx-Q5: In Fig.3, is this average number of queries computed on the C-decoder or the O-decoder?** **[Ans]:** Thanks for the question. For our DAC-DETR, the average number of queries is computed on the C-decoder (no self-attention). As for the baseline, we remove the self-attention layers in the decoder of the already-trained model. It allows us to make a fair comparison on the gathering effect of the two models. We will add these details to the manuscript. 
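The inference-time combination described in LpXx-Q3 (average the two decoders' logits, apply softmax, then suppress duplicates with NMS) can be sketched as follows. This is an illustrative reconstruction of the described procedure, not the authors' code; the greedy NMS and the IoU threshold value are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def iou(a, b):
    # boxes given as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def combine_decoders(logits_o, logits_c, boxes, iou_thr=0.7):
    """Average the two decoders' logits, convert to scores, then apply a
    simple greedy NMS to suppress the duplicates that the C-Decoder
    (having no self-attention) is prone to produce."""
    scores = softmax((logits_o + logits_c) / 2.0).max(axis=-1)
    order = np.argsort(scores)[::-1]  # highest-scored queries first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

As the rebuttal notes, running both decoders at test time considerably increases inference cost, which is why the default DAC-DETR discards the C-Decoder after training.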
* * * **LpXx-Suggestions on figures and writings:** **[Ans]:** We sincerely thank your valuable suggestions and kind reminders. These suggestions help a lot in refining our manuscript. We will go through these details and make the revisions as below: 1) In Fig. 3, $t$ is the threshold on the matching score. We will add the explanation into the caption and change the y axis label to "avg number of queries / object". 2) We will make the scale of the figures constant in Fig. 3. 3) We appreciate that you pointed out the use of the word "contrary" is actually inaccurate and grammatically incorrect. We will replace it with the suggested words (*e.g.*, conflicting objectives, opposing). We will carefully go through these typos and make our expression accurate. * * *
Rebuttal 1: Rebuttal: **General response** We thank all the reviewers for their valuable comments. We provide point-to-point responses to each reviewer, as well as a supplementary PDF for some visualization results. Pdf: /pdf/44629ce49151eef158dcdd1efdb59e46ac4e06f9.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper reveals the “gather ↔ disperse” effects between cross-attention and self-attention layers in the DETR decoder and proposes to add a decoder as an auxiliary branch without the self-attention blocks. The proposed approach achieves competitive detection performances. Strengths: 1. This paper leverages the opposite effects of self-attention and cross-attention to build an auxiliary decoder and improves the detection performances Weaknesses: 1. how general does the gather ↔ disperse effect apply? are there more examples beyond Figure 1 2. it's unclear if this can be combined with other practices such as stable matching and memory fusion to further improve the performances Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is there any similarity between the “gather ↔ disperse” phenomenon vs. the evolution algorithm with “prune out ↔ grow back”? 2. can this be combined with other practices such as stable matching and memory fusion to further improve the performances? 3. is it possible to add the C-decoder with self-attention module only in Table 3 for completeness? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: adequate Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **9psD-Q1: How general does the "gather$\leftrightarrow$disperse" effect apply? Are there more examples beyond Fig.1 ?** **[Ans]:** Thanks for this good question. During rebuttal, we compute statistics across 20\% randomly-sampled images in COCO-2017 and validate that the "gather$\leftrightarrow$disperse" phenomenon is general. Specifically, we employ **1)** the averaged Euclidean distance between queries around the same object and **2)** the averaged IoU between each query and its nearby object, to measure the feature distance and positional distance, respectively. The results show that for the three cases (before the decoder, without decoder self-attention layers, and with decoder self-attention layers), the averaged feature distances are 22.58 $\rightarrow$ 12.35 (gather) $\rightarrow$ 14.13 (disperse) and the averaged IoU scores are 0.41 $\rightarrow$ 0.72 (gather) $\rightarrow$ 0.62 (disperse). These statistical results are consistent with our observation in Fig. 1 in the manuscript and show the "gather$\leftrightarrow$disperse" phenomenon is general. Moreover, we provide more visualizations in the supplementary PDF. * * * **9psD-Q2: It's unclear if this can be combined with other practices such as stable matching and memory fusion to further improve the performances** **[Ans]:** Thank you for this suggestion. During rebuttal, we combine our DAC-DETR with stable matching and memory fusion and observe a clear (+0.7 AP) improvement. Specifically, DAC-DETR achieves 50.0 AP on the DINO baseline (COCO, 12 training epochs, ResNet-50 backbone). Adding stable matching and memory fusion brings another round of +0.7 AP (50.0 $\rightarrow$ 50.7) improvement, showing good compatibility. We note that Stable DINO's official code has not been released and the above experiment is based on our own implementation. Using its future official code might bring better results. 
* * * **9psD-Q3: Is there any similarity between the "gather$\leftrightarrow$ disperse" phenomenon vs. the evolution algorithm with "prune out$\leftrightarrow$grow back"?** **[Ans]:** Thanks. Constructing an analogy between the "gather$\leftrightarrow$disperse" phenomenon and the "prune out$\leftrightarrow$grow back" operation in evolution algorithms is insightful and interesting: the "disperse" is indeed similar to the "prune out". There is a difference as well: in evolution algorithms, what has been pruned out is permanently removed and what grows back is actually new$^{[1]}$. In contrast, in DETR, the queries are not removed and will be gathered again, yielding a conflict that compromises the training efficiency. [1] Torsten Hoefler, et al. Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks, 2021. * * * **9psD-Q4: Is it possible to add the C-decoder with self-attention module only in Table 3 for completeness?** **[Ans]:** Thanks. Following your suggestion, we build a variant whose C-Decoder has only self-attention layers (no cross-attention layers). As shown in the table below, this variant (Variant-4) is even lower than the baseline by 2.4 AP. It is reasonable because the cross-attention layer is a prerequisite for the detector decoder to derive the object information. We will add the result into Table 3 (variants comparison) in the manuscript. | Method |Backbone | epochs | AP | AP50 | | --- | :---: | :---: | :---: | :---: | | Baseline | R50 | 12 | 43.7 | 63.0 | | Variant-4 | R50 | 12 | 41.3 | 60.3 | | DAC-DETR (ours) | R50 | 12 | 47.1 | 64.8 |
Systematic Visual Reasoning through Object-Centric Relational Abstraction
Accept (poster)
Summary: The paper proposes to combine an object-centric representation model (Slot Attention) with a relational reasoning module to create the Object-Centric Relational Abstraction (OCRA) model. For this model, Slot Attention is first pre-trained in an unsupervised way to represent objects separately. Then, the relational reasoning module, consisting of a "relational bottleneck" and a transformer model, is applied to these object-centric representations. This reasoning model is trained in a supervised way on a subset of possible objects. The experiments on three different datasets show that this approach generalizes better to previously unseen objects than comparable baselines. Strengths: The paper is written well and easy to follow. The introduction provides a strong motivation for this line of research. The proposed approach and conducted experiments are described clearly and with sufficient detail, such that it should be possible to reproduce them. The proposed approach seems to be novel and achieves strong generalization performance on previously unseen objects. Weaknesses: In my eyes, the main weakness of this paper is that its contribution relative to existing work is not entirely clear. 1) Most importantly, the paper positions itself as proposing an approach with a stronger inductive bias toward relational abstraction, which ultimately leads to better generalization performance (for example, introduction (l.46) and related work section (l. 135, l. 170). However, the experiments do not compare against many existing approaches. Without such a comparison, it remains unclear whether the existing approaches actually suffer from the problem that the proposed approach claims to solve. Thus, ideally, the paper should provide such a comparison, or alternatively adjust the positioning of the proposed approach relative to existing work. 
2) Besides this, it remains somewhat unclear which parts of the proposed approach are building on existing work (such as Slot Attention) and which parts are new. For example, do equations (1) and (2) and sections 2.2 and 2.3 describe novel architectural components or have they been used in a similar fashion before? --- 3) The related work section fails to cite many relevant papers. Object-centric representation learning has seen growing interest in recent years, with many papers proposing various solutions. I would recommend taking a look at the related works section of [1], which provides a comprehensive overview of papers that are relevant to this work. Besides this, [2] describes a recent model that combines object-centric representation learning with relational reasoning capabilities and [3] describes generalization properties of Slot Attention to previously unseen objects. 4) The experimental results could be strengthened by applying all baselines across all datasets. Additionally, why is the CoRelNet not included in the baselines? It seems like one of the most relevant existing approaches. --- Minor points: 5) Equation (2): What does adding $pos$ to the position embeddings $m_k$ add for the relational reasoning module? It is not dependent on the input, and cannot be adjusted by the relational reasoning module as it is part of the pre-trained, frozen SlotAttention model. Thus, I would expect that it doesn't provide much useful information. 6) Equation (4): $m_k$ will implicitly contain some information on the object shape, since $attn$ is used in Eq (2). Do you think this could allow the model to circumvent the relational bottleneck, if $m$ would be processed by some more powerful non-linearities? 7) Line 115: "endowing OCRA with an explicity variable-binding mechanism". What do you mean with this statement? 8) line 67: what is a "position-wise fully-connected layer"? Are you referring to the 1x1 convolutions in Table S2? 
In general, I think it would be helpful to draw direct connections between the components of the architecture described in Table S2 and the components described in the main text. 9) Eq. (1) + (2): what is the $\cdot$ symbol referring to here? In Eq. (3), this symbol is used to describe a dot product, which would not make much sense here, I think. 10) Lines 255-262: I would move most of this description into the method section. 11) Table 2: Use the same order for the different tested conditions in the table as is used within the text. --- [1] Jiang, J., Deng, F., Singh, G., & Ahn, S. (2023). Object-centric slot diffusion. arXiv preprint arXiv:2303.10834. [2] Wu, Z., Dvornik, N., Greff, K., Kipf, T., & Garg, A. (2022). Slotformer: Unsupervised visual dynamics simulation with object-centric models. ICLR 2023 [3] Dittadi, A., Papa, S., De Vita, M., Schölkopf, B., Winther, O., & Locatello, F. (2021). Generalization and robustness implications in object-centric learning. ICML 2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: As described above, I believe the main weakness of this paper is its current positioning relative to the existing research landscape. If the authors could improve this point, or strengthen the current claims by providing additional experiments, I would be happy to adjust my score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: While the relational bottleneck improves the model's ability to generalize to previously unseen objects, I would expect it to also limit the model's applicability to other settings. For example, in the experiments, a separate model is applied for each separate task and thus for each separate relation to be learned. 
I would expect that the relational bottleneck would harm the model's performance if it had to differentiate several relations at once. Additionally, I would expect it to be more difficult to represent more complex relations between objects, as the slightly worse results on the spatial relations in the SVRT dataset might indicate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful suggestions and comments. We provide a point-by-point reply below: 1. Positioning relative to previous work: - We have endeavored to cite all relevant previous work in the introduction and related work sections. While it was not feasible to implement baselines based on each and every one of these references, we have also done our best to select a representative subset of baselines that capture the important computational features of this previous work, and that are likely to represent the strongest possible baselines in the context of the abstract visual reasoning tasks we consider. Specifically, the slot transformer baseline is extremely similar to models from the references cited on lines 46 and 135, and was modeled closely on the implementation of [1] (ref 25 in the paper). The interaction network in our slot-IN baseline was highly similar to the models from the references cited on line 170, and was based closely on [2] (ref 43 in the paper). We believe that these baselines capture the important properties of the referenced papers, but we would be happy to implement specific additional baselines at the reviewer’s suggestion. 2. Clarifying novel components of proposed model: - The factorization of slot attention into separate spatial and non-spatial components (equations 1 and 2) is indeed a novel feature of our model. The relational embeddings described in section 2.2 also constitute a novel aspect of the model. Section 2.3 describes passing these relational embeddings to a transformer, which is a novel use of transformers. Overall, the architecture does employ some components from previous work (slot attention, transformers), but they are configured in a novel way to form a broader architecture with very different properties than the individual components alone. 3. Missing references: - We thank the reviewer for bringing these references to our attention. 
We agree that they are highly relevant to this work, and have added them to the Related Work section, along with a few other missing references that we identified from [3]. 4. Systematic evaluation of all baselines, and addition of CoRelNet baseline: - We agree with the reviewer’s suggestion to more systematically evaluate all baselines on all tasks. We have now included this evaluation in the revised paper (see Figures 1-3 of attached PDF). **We found that OCRA performed better than the baseline models in most settings, with the best average performance across all tasks, and especially strong performance in the most extreme generalization regimes.** - **We have also added a slot-CoRelNet baseline**, by combining slot attention with CoRelNet. This baseline did not perform very well, likely due to the random permutation of objects in slot attention. Minor points: 5. Role of positional embeddings in relational reasoning: - The positional embeddings are used to index the objects over which relations are computed. In other words, our relational embeddings can be interpreted as representing information such as ‘the object at location $i$ has the relation $R$ with the object at location $j$’, where $i$ and $j$ are spatial locations, and $R$ is a relation such as ‘same shape’. These positional indices are necessary to keep track of which relations correspond to which objects. 6. Potential to break bottleneck through position embeddings: - It is very important to our formulation that $m_k$ be a *linear* combination of the position embeddings. A nonlinear transformation of the position embeddings could indeed result in the extraction of implicit shape information, and therefore break the relational bottleneck. We have added this important clarification to our description of the model. 7. 
Meaning of ‘explicit variable-binding mechanism’: - This refers to the fact that our relational embeddings use positional information as indices, as described in the response to point #5 above (i.e., the relations are bound to explicit indices for the corresponding objects). 8. 'Position-wise fully-connected layer' vs. '1x1 convolutions': - Yes, these refer to the same components. We agree this is confusing, and have revised the paper to consistently use the term ‘1x1 convolutions’. 9. Clarification about equations 1 and 2: - Thanks to the reviewer for catching this mistake. These are simply matrix multiplications. They have been reformatted as follows (where $\boldsymbol{attn}_{T} \in \mathbb{R}^{K\times N}$, $\operatorname{flatten}(\boldsymbol{feat}) \in \mathbb{R}^{N\times D}$, and $\operatorname{flatten}(\boldsymbol{pos}) \in \mathbb{R}^{N\times D}$): $z_{k} = \boldsymbol{attn}_{T} \operatorname{flatten}(\boldsymbol{feat})[k]$ $m_{k} = \boldsymbol{attn}_{T} \operatorname{flatten}(\boldsymbol{pos})[k]$ 10. Moving lines 255-262: - We have now moved these lines to the supplementary. 11. Ordering of entries in Table 2: - We have reordered the table accordingly. Limitations: - The extent to which the proposed model could simultaneously model multiple relations is an interesting question. In future work, we plan to investigate how the model performs in multi-task settings. One possibility is to develop a 'multi-head' version of the current model, which may be better suited to this setting. We have added some discussion of this issue to the revised paper. - In the case of the SVRT spatial relation tasks, the model's performance was primarily limited by suboptimal object segmentation in our slot attention module. This was partly remedied by training slot attention for longer (see updated results in attached PDF). But we agree that it would be interesting in future work to evaluate the model on more complex real-world relations. [1] Mondal et al. (2023). 
Learning to reason over visual objects. [2] Watters et al. (2017). Visual interaction networks: Learning a physics simulator from video. [3] Jiang et al. (2023). Object-centric slot diffusion. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I propose that the authors incorporate their responses to points 1 and 2 into their paper. I believe that doing so will better highlight its contributions. Moreover, the limitations I previously noted could enrich the paper if acknowledged explicitly. All my other concerns have been addressed adequately, and I will update my rating to "weak accept". --- Reply to Comment 1.1.1: Title: Further revisions Comment: We thank the reviewer for these suggestions. We agree that adding additional discussion of these issues would further improve the paper. To address this, we have made the following revisions: - At the end of Related Work, we have added the following statement clarifying the connection between the baselines and previous work (Note that the references are those that were cited on lines 135 and 170 of the original submission): ‘To empirically evaluate the importance of the relational bottleneck, we compare our approach with a set of baseline models that capture the key computational properties of previous models, including a slot-transformer baseline that combines slot attention with a transformer reasoning module [9, 34, 46, 25], and a slot-interaction-network baseline that combines slot attention with an interaction-network reasoning module [43, 39, 15, 47, 2, 19, 35, 41, 12, 36].’ - We have also added the following statement to the end of Related Work, to clarify the novel aspects of our proposed approach: ‘To summarize our contributions relative to previous work, our proposed model includes: 1. A novel object representation format that is factorized into distinct feature and position embeddings (Equations 1 and 2), enabling a form of explicit variable-binding. 2. 
A novel relational embedding method that implements a relational bottleneck (Equations 3 and 4). 3. An architecture that combines these elements with preexisting components (slot attention and transformers) in a novel manner to support systematic visual reasoning from complex multi-object visual displays.’ - Finally, we have now revised the Conclusion to more explicitly address the limitations identified in this and other reviews: ‘6. Limitations and Future Directions In the present work, we have presented a model that integrates object-centric visual processing mechanisms (providing the ability to operate over complex multi-object visual inputs) with a relational bottleneck (providing a strong inductive bias for learning relational abstractions that enable human-like systematic generalization of learned abstract rules). Though this is a promising step forward, and a clear advance relative to previous models of abstract visual reasoning, it is likely that scaling the present approach to real-world settings will pose additional challenges. First, real-world images are especially challenging for object-centric methods due to a relative lack of consistent segmentation cues. However, there has recently been significant progress in this direction [33], in particular by utilizing representations from large-scale self-supervised methods [50, 7], and it should be possible to integrate these advances with our proposed framework. Second, the current approach assumes a fixed number of slot representations, which may not be ideal for modeling real-world scenes with a highly variable number of objects [51]. Though we did not find that this was an issue in the present work, in future work it would be desirable to develop a method for dynamically modifying the number of slots. Third, OCRA’s relational operator is applied to all possible pairwise object comparisons, an approach that may not be scalable to scenes that contain many objects. 
In future work, this may be addressed by replacing the transformer component of our model with architectures that are better designed for high-dimensional inputs [16]. Finally, it is unclear how our proposed approach may fare in settings that involve more complex real-world relations, and settings that require the discrimination of multiple relations at once. It may be beneficial in future work to investigate a ‘multi-head’ version of our proposed model (analogous to multi-head attention), in which multiple distinct relations are processed in parallel. A major challenge for future work is to develop models that can match the human capacity for structured visual reasoning in the context of such complex, real-world visual inputs.’
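As an aside on point 9 of the rebuttal above: the factorized readout in equations 1 and 2 amounts to two plain matrix multiplications. A minimal sketch, using the shapes stated in the rebuttal ($K$ slots, $N$ flattened locations, dimension $D$) with randomly generated values purely for illustration:

```python
import numpy as np

K, N, D = 6, 64, 32  # slots, flattened spatial locations, embedding dim (illustrative sizes)
rng = np.random.default_rng(0)

attn_T = rng.random((K, N))
attn_T /= attn_T.sum(axis=1, keepdims=True)  # each slot's attention weights over locations

feat = rng.random((N, D))  # flatten(feat): per-location feature embeddings
pos = rng.random((N, D))   # flatten(pos): per-location positional embeddings

z = attn_T @ feat  # eq. 1: row z[k] is slot k's appearance embedding
m = attn_T @ pos   # eq. 2: row m[k] is a *linear* combination of position embeddings

print(z.shape, m.shape)  # (6, 32) (6, 32)
```

Because `m` is strictly linear in `pos`, no implicit shape information can leak into the position channel, which is the property the rebuttal emphasizes in response to the question about breaking the bottleneck.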
Summary: - The paper seeks to tackle various visual reasoning problems with a focus on systematic generalization. - The paper argues that we need explicit inductive bias to extract object-object relationships and express these relationships via a low-capacity representation. - The paper seeks to do this by first extracting object vectors (or slots) using pre-trained slot attention. It then performs a dot-product of these slots to obtain a “relation embedding” — which is really a scalar number expressed as an embedding (as far as I understand). These relation embeddings are then given to a transformer and the whole model (except the slot attention) is trained to classify whether the given input is conformant to the true pattern or not. - The performance is then evaluated on a systematically OOD test set and compared with various baselines and model ablations on 3 visual reasoning benchmarks: ART, SVRT, and CLEVR-ART. Here, CLEVR-ART is a novel dataset proposed within this paper. Strengths: 1. Simple and elegant architecture. 2. Propose a new dataset for visual reasoning and systematic generalization called CLEVR-ART, a sort of visually complex variant of ART. This appears to be a useful contribution to facilitating progress in the community. 3. Good systematic generalization performance compared to baselines. 4. Several useful baseline comparisons. For instance, the paper shows that a standard transformer when applied to slots is not enough to systematically generalize and the proposed relation layer is useful. 5. Useful ablations. For instance, the paper shows that all of the following are useful: relation bottleneck, relation embedding, object-centric inputs, decomposing position and appearance, etc. Weaknesses: I have several questions and possible avenues for improvements that I describe in the “Questions” section. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. 
Also, what part of the architecture is enforcing the “relational bottleneck”? Is it due to the fact that the dot product of two slots results in a scalar number? If yes, then this should be highlighted somehow both in the text and perhaps also in Figure 1. One may also conduct an experiment in which the “bottleneck” dimension is gradually increased (e.g., 1 to 2 to 5 and so on) and show that this gradually worsens the generalization performance. 2. Line 30: Regarding learning relational abstractions: Is it possible to take a large batch of inputs, extract relation “embeddings”, and visualize object pairs that have similar relational “embeddings”? This would give more credence to the fact that relational abstractions are indeed being inferred. 3. Line 108: Is it necessary to say “shared” here? IMO, I wouldn’t have assumed that the projection matrices were not shared (based on the equations). 4. It should be better highlighted in the introduction section and the abstract that CLEVR-ART is a novel contribution. This should also be discussed in the related work relative to the existing visual reasoning benchmarks. It would also be useful to make a statement about whether this dataset will be released to the research community or not. 5. Table 1: Why are several baselines that are shown for ART not shown for CLEVR-ART and SVRT? I think these baselines will be useful to show for all datasets. Also, I would suggest using a consistent format for reporting the results of all 3 datasets, i.e., picking either the bar-plot format or the tabular format. 6. L263: In the case of inputs involving multiple images, how are they processed by the network? As I understand, the model does not sequentially consume multiple images. Does “inserted” mean programmatically generating the candidate images, each containing multiple objects? 7. L283-298. I find the discussion of the results rather brief. For instance, what is the rationale for the comparison with ESBN or GAMR? 
What is the key distinguishing characteristic of those baselines with respect to the proposed one? What can we learn from the result of this comparison i.e., why does the proposed model outperform? 8. (Minor) Line 34: Would be good to cite the paper(s) that support this statement “By biasing architectures to process visual inputs in terms of relations between objects, these recent approaches have achieved strong systematic (i.e., out-of-distribution) generalization of learned abstract rules, given only a small number of training examples.” I am giving a score of 6 in the hope that the authors will try and address some of my questions and concerns. If addressed to some degree, I will happily increase the score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, the limitations are discussed in the last line of the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful suggestions and comments. We provide a point-by-point reply below: 1. Mechanism enforcing the relational bottleneck: - The reviewer asks whether the dot product enforces a relational bottleneck only because it compresses inputs to a single dimension. To investigate this, we performed an ablation experiment, in which the dot product was replaced with a linear layer that projected the concatenated pair of input embeddings to a single dimension. We found that this model performed very poorly on the $m=95$ (most difficult) generalization regime for the RMTS (50.7% test accuracy) and ID (48% test accuracy) tasks. Thus, the relational bottleneck does not result from the unidimensional nature of the dot product per se. Instead, we believe that the most important feature of the dot product is that it is inherently relational, in that it is based on a multiplicative interaction between the two input embeddings. By contrast, when using a linear layer (or MLP, as in the ‘- Relational Bottleneck’ ablation model) to model relations, there is nothing to prevent the model from simply copying individual object features into the ‘relational’ embedding (thus making it not really relational). We have included the result of this ablation experiment in the revised paper, along with a discussion of the implications (we are unfortunately not able to include the specific revisions in this reply due to space constraints). 2. Visualization of relational embeddings: - This is an excellent suggestion. We regrettably did not have time to carry out this analysis during the rebuttal period, but we will try to do so before the end of the discussion period and share the results here time permitting. 3. Specification of ‘shared’ weight matrices: - We agree that the ‘shared’ nature of the weight matrices (mentioned in line 108) most likely doesn’t need clarification. 
However, our concern was that some readers may expect these to be separate matrices, in line with the common use of separate ‘key’ and ‘query’ matrices in self-attention. We included this clarification to avoid that potential confusion. 4. Clarification about novelty of CLEVR-ART and release to public: - We have added the clarifications to the abstract and introduction concerning the novelty of CLEVR-ART. We have also added discussion of CLEVR-ART relative to other visual reasoning benchmarks to the Related Work section (specific revisions omitted here due to space constraints). - We do plan to make the dataset publicly available upon acceptance of the paper. 5. Systematic testing of baseline and consistent format for results: - We thank the reviewer for these suggestions. We have now evaluated all baselines on all three of these benchmarks (with the exception of Attn-ResNet, for which we do not have source code). The results can be found in Figures 1-3 of the attached PDF. **We found that OCRA performed better than the baseline models in most settings, with the best average performance across all tasks, and especially strong performance in the most extreme generalization regimes.** These results will be included in the final version of the paper, and we will also make sure to report all of these results as figures. 6. Evaluation on multi-object inputs: - The reviewer is correct that our approach involved ‘programmatically generating the candidate images, each containing multiple objects’. We have described this approach in line 199 of the original submission: ‘*As originally proposed, the ART dataset involved pre-segmented visual objects. Here, we investigated a version of this task involving multi-object visual displays (see Supplementary Section S1 for details).*’ 7. Limited discussion of results: - We agree that more discussion of the results would be informative. 
Briefly, many of the baseline models do not contain a relational bottleneck (Transformer, IN, RN, GAMR, ResNet), and their ability to generalize is therefore limited to iid settings (e.g., low values of $m$ for the ART tasks). The comparison with these models is thus intended to evaluate the importance of the relational bottleneck, which enables stronger performance in the OOD setting. The ESBN and CoRelNet architectures *do* include a relational bottleneck, but they are not designed with multi-object inputs in mind, and they therefore perform poorly on these tasks in all regimes (even when combined with our slot attention module). This is due primarily to the random permutation of slots in slot attention, which motivated our positional embedding variable-binding scheme (to keep track of which relations correspond to which pairs of objects). We have added discussion of these issues to the Results section (specific revisions omitted here due to space constraints). 8. Citing papers to support statement about OOD generalization: - This statement is referring to the same papers cited at the beginning of the paragraph (line 30). We have added references to the revised paper to clarify this. --- Rebuttal Comment 1.1: Title: Thank You Comment: Thank you for the response! I appreciate the response that it is the dot product that is playing the key role as evidenced by the failure of the ablation in which slot pairs were concatenated and mapped to single-dimension. Although I find this outcome very intriguing and interesting, I am not fully clear why it does work. Is there something inherent about multiplicative interaction that leads to this? For instance, would the outer product also work similarly? Another question is: Consider that $\mathbf{z}$ contains the appearance information but not spatial position information while $\mathbf{m}$ contains spatial position information but not appearance information. Then, I think that Eq. 
4 should lose information about which object appearance is associated with which spatial position. Is this true and is this by design? I also read other reviewers' comments, especially the one asking for the CoRelNet baseline. I think this was a valid point. I agree that this baseline should have been compared. I appreciate the authors' addition of CoRelNet results and I also agree with the authors' response that CoRelNet should struggle to handle permutation invariant representations e.g., slots of slot attention. However, since I missed noticing this related work the first time, I am now less confident that I fully grasp all pieces of the related work. As such, I will reduce my confidence from 4 to 3. But I still maintain my rating of 6 to the best of my understanding. --- Reply to Comment 1.1.1: Comment: Thank you for the continued engagement. We have now carried out the analysis that was suggested in question 2 of the initial review (visualization of relational embeddings). We performed this analysis specifically for the same/different task, in the $m=95$ generalization regime. This was the most difficult generalization regime in our task, so we thought this would be the strongest test of whether the relational embeddings truly captured abstract relations. We performed PCA on the relational embeddings (the output of equation 3 in the paper) for a set of 100 problems, and visualized the results along the first two principal components. The results formed two distinct clusters, one for pairs of objects with the same shape, and one for pairs of objects with different shapes. Thus, the relational embeddings identified the relevant relational abstraction (same/different), despite the fact that the inputs involved completely novel shapes not observed during training. We will include these results in the final version of the paper, and thank the reviewer for this suggestion. Inner vs. outer product: This is a great question. 
To test it, we performed an additional ablation experiment, in which the dot product in the relational operator (equation 3 of the paper) was replaced with an outer product. Note that this results in a matrix of size $D \times D$. We flattened this matrix, and passed it through a learned linear layer to reduce it to size $D$. This version of the model did not perform nearly as well as our primary model (see results in Table below). This suggests that our relational operator depends on *both* multiplicative interaction *and* compression. Ablation models that only have compression (the ablation model with a learned linear projection to one dimension) or only have multiplicative interaction (the new ablation model involving an outer product) do not perform as well. We will add these results and some additional discussion of these issues to the final version of the paper. We thank the reviewer for raising these issues, as we feel that they help to clarify the important factors underlying the implementation of the relational bottleneck. | | RMTS | ID | | -- | -- | -- | | OCRA (original) | **85.31 ± 2.0** | **92.80 ± 0.3** | | OCRA (outer product) | 62.84 ± 1.6 | 69.58 ± 1.1 | Clarification about equation 4: In equation 4, information about the appearance of *individual* objects is discarded, but information about the *relationship* between the appearance of two objects is preserved, and this is then associated with the spatial position of the two objects. The discarding of information about individual object appearance is indeed by design. Please let us know if we can provide further clarification. Regarding the related work, we would be happy to discuss any remaining issues that we can help clarify. We have made our best effort to address all of the concerns raised in the initial reviews and during the discussion period. If there are any issues that haven’t been sufficiently addressed, we would be happy to discuss further.
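The three relational-operator variants compared in the exchange above — the dot product, the outer-product ablation with a learned projection, and the concatenation-plus-linear ablation — can be contrasted in a toy sketch. Random weights stand in for learned parameters here; this illustrates the shapes and interaction types discussed, not the authors' actual implementation:

```python
import numpy as np

D = 32  # embedding dimension (illustrative)
rng = np.random.default_rng(1)
z_i, z_j = rng.random(D), rng.random(D)  # appearance embeddings of two slots

# Dot product (original operator): multiplicative interaction *and*
# compression to a scalar, discarding individual-object appearance.
r_dot = float(z_i @ z_j)

# Outer-product ablation: multiplicative interaction without compression;
# the D*D matrix is flattened and projected back to D dimensions.
W_outer = rng.random((D * D, D))
r_outer = np.outer(z_i, z_j).reshape(-1) @ W_outer

# Concatenation + linear ablation: compression without multiplicative
# interaction; nothing prevents it from copying single-object features.
w_lin = rng.random(2 * D)
r_lin = float(np.concatenate([z_i, z_j]) @ w_lin)

print(r_outer.shape)
```

Per the rebuttal's interpretation, only the first variant combines both ingredients (multiplicative interaction and compression), which is the combination that performed well empirically.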
Summary: The research topic of this study is the development of a learning machine that achieves systematic generalization in reasoning over complex relations of objects in a visual input (still image). Toward this goal, the authors proposed a new neural network model (OCRA), taking inspiration from recent results on effective inductive biases for systematic generalization in relational reasoning and from methods for obtaining object-centric representations. More concretely, the proposed model comprises three core components: the first component (a slot attention mechanism) extracts object-centric representations from a visual input containing multiple objects, the second component computes pairwise relation embeddings, and the third component (a transformer) provides the final output related to the higher-order relations. The effectiveness of the proposed model was tested with three visual reasoning tasks, two existing and one new, and also compared with various baseline models. As a whole, the proposed model can be said to be the best in terms of systematic generalization among the compared models. An ablation study was also conducted to evaluate the roles of the components in the proposed model and of pretraining. Strengths: The top-level idea behind the proposed model, that is, combining the inductive biases for relational reasoning with a method for obtaining object-centric representations, is clear and reasonable. How the idea is implemented with the three core components is explained fairly well. Although the top-level idea might look somewhat straightforward at first glance given the advancement in the two directions, the concrete implementation is not trivial, and the differences from existing work are described in the paper. The effectiveness of the proposed model was tested with fairly rich experiments. 
Two existing tasks (ART and SVRT, both created with 2D shapes) and a new task developed based on CLEVR (CLEVR-ART, created with 3D shapes) were used, and various baseline models, including a very recent one (GAMR, accepted at this year's ICLR), were compared with the proposed model. An ablation study gives additional value to this work. Development of a learning machine that achieves systematic generalization in complex visual reasoning is an important topic in AI. Although there is still a distance to real-world applications, as stated in Section 6 (Conclusion and Future Directions), the proposed method and the evaluation results will be of interest to the NeurIPS audience. Weaknesses: 1. A weak point of this submission is the lack of the source code of the proposed model as Supplementary Material. It is also unknown whether the source code will be made publicly available if the paper gets accepted. These points cast a shadow on reproducibility. 2. A relatively weak point of the proposed model itself seems to be around the number of slots, $K$. First, as stated in line 117, the proposed model computes all $K^2$ pairwise relations. Second, in the current model, $K$ must be defined in advance. These characteristics can be obstacles to reasoning over the relations of objects in real-world visual inputs. Namely, the first point might affect the applicability (scalability) to complex natural images, and there should be some additional mechanism to determine an appropriate $K$ in the first place if the current structure of the proposed model is kept. 3. (Related to 2.) Although it is stated in Section 6 (Conclusion and Future Directions) that there exists a gap between the problems addressed in this study and real-world vision, the details are not explained. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions 1. Can the source code of the proposed method be provided during the Author Rebuttal period? 
In addition, will the source code be made publicly available if the paper gets accepted? 2. What are your thoughts on the second point in the Weaknesses section above? Suggestion It would be beneficial if the authors could add detailed descriptions of what kinds of differences between the problems treated in this study and real-world vision need to be overcome, and also add explanations of the relation between those differences and the self-supervised learning methods mentioned in Section 6, if possible. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors stated in Section 6 (Conclusion and Future Directions) that the problems addressed in this study are still simple compared with real-world vision. However, it is currently not explained in detail what kinds of differences between the problems treated in this study and real-world vision need to be overcome. Please also refer to the Weaknesses and Questions sections above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
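The reviewer's second weakness point — that the model computes all $K^2$ pairwise relations — can be made concrete: with a dot-product relational operator, the full relation table is a $K \times K$ Gram matrix, so its size grows quadratically in the number of slots. A minimal sketch with assumed sizes:

```python
import numpy as np

D = 32  # embedding dimension (illustrative)
rng = np.random.default_rng(2)

for K in (6, 32, 128):          # number of slots
    Z = rng.random((K, D))      # slot appearance embeddings
    R = Z @ Z.T                 # all K^2 pairwise dot-product relations at once
    print(K, R.shape, R.size)   # R.size grows quadratically with K
```

This quadratic growth in the number of relation embeddings fed to the downstream transformer is what underlies the scalability concern for scenes with many objects.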
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful suggestions and comments. We provide a point-by-point reply below: 1. Lack of source code: - We thank the reviewer for bringing this to our attention. **We have now provided the AC with an anonymized link to the source code for our model. We will also make this code publicly available upon acceptance of the paper.** 2. Fixed number of slots and scalability: - A desirable feature of slot attention is that the number of slots can be set to the maximum number of expected objects, and slots that don’t correspond to objects will remain largely unutilized (e.g. if there are 6 slots but only 2 objects in an image, typically only 2 slots will end up being used). OCRA inherits this property from slot attention. Therefore, in the ART dataset, in which different problem types contain different numbers of objects (ranging between 2 and 6), we set the number of slots to $K=6$ for all problems. However, we agree that an ideal approach would involve autonomously adjusting the slots based on the number of objects present in an image. We have added some additional discussion of this issue to the paper (see excerpt from revised text below). - We also agree that computing all pairwise relations may pose a challenge when scaling to real-world scenes with many objects. In future work, we believe that this may be addressed by replacing the transformer component of our model with other methods that are better equipped to deal with very high-dimensional inputs, e.g. [1]. We have added additional discussion of this issue to the revised paper (see excerpt below). 3. Lack of detail on obstacles to scaling: - We agree that this issue deserves further attention. 
We have revised the discussion of these issues in the conclusion of the paper as follows (new text in bold, note also reference numbers are not the same as those in the paper): ‘In the present work, we have presented a model that integrates object-centric visual processing mechanisms (providing the ability to operate over complex multi-object visual inputs) with a relational bottleneck (providing a strong inductive bias for learning relational abstractions that enable human-like systematic generalization of learned abstract rules). Though this is a promising step forward, and a clear advance relative to previous models of abstract visual reasoning, it is likely that scaling the present approach to real-world settings will pose additional challenges. **First, real-world images are especially challenging for object-centric methods due to a relative lack of consistent segmentation cues. However, there has recently been significant progress in this direction [2], in particular by utilizing representations from large-scale self-supervised methods [3], and it should be possible to integrate these advances with our proposed framework. Second, the current approach assumes a fixed number of slot representations, which may not be ideal for modeling real-world scenes with a highly variable number of objects [4]. Though we did not find that this was an issue in the present work, in future work it would be desirable to develop a method for dynamically modifying the number of slots. Finally, OCRA’s relational operator is applied to all possible pairwise object comparisons, an approach that may not be scalable to scenes that contain many objects. In future work, this may be addressed by replacing the transformer component of our model with architectures that are better designed for high-dimensional inputs [1].**’ [1] Jaegle et al. (2021). Perceiver: General perception with iterative attention. [2] Seitzer et al. (2022). Bridging the gap to real-world object-centric learning. 
[3] Caron et al. (2021). Emerging properties in self-supervised vision transformers. [4] Zimmermann et al. (2023). Sensitivity of Slot-Based Object-Centric Models to their Number of Slots. --- Rebuttal Comment 1.1: Comment: Thank you very much for thoroughly considering my review. All of my questions and suggestions have been adequately responded to. I raised the score for Presentation from 2 to 3 and that for Rating from 5 to 6. \# The link to the source code was shared by AC.
Summary: This work proposes a new method, named OCRA, that combines object-centric representation learning (for object abstraction) and a relational bottleneck (for relational abstraction). Particularly, OCRA consists of three components: 1) a slot attention module to extract object-level representations, 2) a relational operator to get pairwise visual relations, and 3) a transformer to model higher-order relations. The slot attention model is pretrained on a large dataset and the relational modules are trained on a small task-specific dataset while freezing the slot attention. Experiments were performed on three datasets: ART, SVRT and CLEVR-ART to show the effectiveness of the proposed method. Strengths: 1. The idea of combining slot attention with a relational bottleneck (relation operator and transformer) for solving visual relational reasoning problems sounds interesting and also novel to me. 2. The presentation of the idea and the overall writing are very clear. 3. Experiments on synthetic datasets (ART, SVRT and CLEVR-ART) show the proposed method can work for various relational reasoning tasks and it achieves better performance than baselines in many cases. Weaknesses: 1. For the benchmarks, I have a concern about their simplicity. For example, the performance of many methods on the ART dataset is close to 100%. Similar observations also exist in the other two datasets. Does it mean we have already achieved human-level visual reasoning performance, or that we need better benchmarks to evaluate the success of methods? I think considering some more challenging benchmarks will make the results more convincing. For example, RAVEN [1] and Bongard-HOI [2] are two challenging benchmarks for testing a model's visual relational reasoning abilities. In particular, Bongard-HOI considers real-world natural images. 2. In Figure 4, it is a little surprising that OCRA performs worse than ResNet on SVRT - spatial relations with 1000 training examples. Any intuition on this? 3. 
For the ablation studies, to test the impact of slot attention, it is too naive to do “feature map divided into a 4x4 grid”. How about comparing slot attention with some standard SSL trained object-centric representation learning methods? Also, from Table 2, we can see that Transformer has a much higher impact than the slot attention and relational bottleneck on both RMTS and ID. Does it mean the most important part of OCRA is the transformer rather than slot attention and relational bottleneck? If so, it downplays the method's significance a little, in my opinion. [1] RAVEN: A Dataset for Relational and Analogical Visual rEasoNing, CVPR 2019. [2] Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions, CVPR 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: My major concerns and questions are in how the experiments support the effectiveness and significance of the proposed method. Please see the weaknesses part for more details. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have well addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful suggestions and comments. We provide a point-by-point reply below: 1. Synthetic vs. real-world benchmarks: - The primary focus of the benchmarks we consider is on the more extreme **out-of-distribution generalization and low-sample regimes**. In the most extreme generalization regime of the ART dataset ($m=95$ in Figure 1 of the attached PDF above), **most baselines perform very poorly, with many displaying near-chance performance**. The baseline performance on CLEVR-ART and the same/different SVRT tasks (especially in the low-sample regime, with only 500 training examples) is similarly poor. Thus, though these tasks don’t involve real-world images, they are nevertheless extremely challenging tests of out-of-distribution visual reasoning. - We also agree with the reviewer that it is important to extend models of visual reasoning so that they are capable of processing real-world visual inputs. We believe that this work takes a significant step in that direction, by proposing a model capable of strong **OOD generalization from multi-object inputs (ART and SVRT) and even photorealistic three-dimensional objects (CLEVR-ART)**. We agree that it will be important in future work to take this even further, by evaluating the model on real-world images, utilizing datasets such as Bongard-HOI. Though this poses a few challenges (which we discuss in the general response above), we do not see any fundamental obstacles to extending our model to these kinds of datasets, and plan to do so in future work. 2. Relatively poor performance on SVRT - spatial relations: - We found that OCRA’s poor relative performance on these tasks was related to the fact that the learned slot representations were not perfectly object-centric. This was partially remedied by training slot attention for longer. 
With further pretraining of slot attention, OCRA outperforms ResNet in the low-sample regime (500 training examples), and performs on par with ResNet in the high-sample regime (1k training examples; see Figure 3 in the attached PDF above). Note also that OCRA’s performance on these tasks is generally strong (around 90% test accuracy), and higher than it is for the same/different tasks (which were extremely challenging for many of the baselines). 3. Testing impact of slot attention and relational bottleneck: - To better test for the impact of slot attention, the reviewer proposes that we compare OCRA with a ‘standard SSL trained object-centric representation learning method.’ We are not sure which specific methods the reviewer has in mind. The purpose of the ‘- Slot Attention’ ablation is precisely to test for the impact of using object-centric representations, so this ablation model should not be object-centric. However, we agree that it is informative to compare to other self-supervised representation learning methods. We have now added a comparison with a state-of-the-art masked autoencoder model (MAE; using the code base from [1]) on the identity rules and distribution-of-3 ART tasks (it is not clear how to formulate the same/different and RMTS tasks in a generative manner). We applied this model to our tasks by masking out the final object in each problem (in the bottom right cell of the input), and training the model to fill in this patch. To select from the set of multiple choices, we then compared the model’s generated output with the four answer choices, and selected the choice with lowest mean-squared error. We found that this model performed considerably worse than OCRA, especially in the most extreme generalization regimes ($m=95$). We have included the results in a table below, and will also add them to the revised paper. If the reviewer had other methods in mind, we would be happy to test these and compare with our model if time permits. 
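For clarity, the answer-selection step described above (compare the MAE's generated fill-in against the four answer patches and pick the one with lowest mean-squared error) can be sketched as follows; the function name and array shapes are illustrative only, not taken from our code base:

```python
import numpy as np

def select_answer(generated_patch, answer_choices):
    """Pick the multiple-choice answer whose image patch best matches
    the model's generated fill-in, by lowest mean-squared error."""
    errors = [np.mean((generated_patch - choice) ** 2) for choice in answer_choices]
    return int(np.argmin(errors))

# Toy example: the generated patch is closest to choice 2.
gen = np.full((8, 8), 0.5)
choices = [np.zeros((8, 8)), np.ones((8, 8)),
           np.full((8, 8), 0.45), np.full((8, 8), 0.9)]
print(select_answer(gen, choices))  # → 2
```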
- The reviewer notes that the most impactful ablation is the ‘- Transformer’ model. This demonstrates that higher-order relations (relations between pairwise relations, which are extracted by the transformer in OCRA) are essential to the model’s performance. However, we note that ablating either slot attention or the relational bottleneck also has a severe effect on performance. For instance, in the RMTS task, performance drops by ~29% without slot attention, and by ~22% without a relational bottleneck. This demonstrates that all major elements of the model (object-centric representations, relational abstraction, higher-order relations) play an important role. | Distribution-of-3 | $m=0$ | $m=50$ | $m=85$ | $m=95$ | | ------- | ------ | ------- | ------- | ------- | | OCRA (ours) | **98.86 ± 0.2** | **97.87 ± 0.2** | **96.09 ± 0.4** | **86.42 ± 1.2** | | MAE | **99.99 ± 0.0** | 56.47 ± 1.1 | 40.90 ± 1.2 | 28.85 ± 0.9 | | Identity rules | $m=0$ | $m=50$ | $m=85$ | $m=95$ | | ------- | ------ | ------- | ------- | ------- | | OCRA (ours) | **99.01 ± 0.0** | **98.01 ± 0.1** | **96.67 ± 0.2** | **92.80 ± 0.3** | | MAE | 66.55 ± 0.2 | 44.96 ± 0.6 | 38.39 ± 0.9 | 31.56 ± 1.0 | [1] He et al. (2022). Masked autoencoders are scalable vision learners. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks to the authors for providing detailed responses to my questions and adding the new experiments, which support the effectiveness of the slot attention. The rebuttal has addressed most of my concerns. I have also read other reviews and increased my rating to weak accept.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their many insightful suggestions and comments. We have added a number of new experiments to address the concerns raised, and revised the paper to improve clarity and provide further elaboration on some issues. We believe the paper is significantly improved as a result of these changes. Here, we will address the major issues raised by the reviewers, but we also provide point-by-point replies to each reviewer’s comments below. - More systematic evaluation of baselines: We thank the reviewers for raising this issue, and have now included results for all baselines on all tasks, including six new baseline results for CLEVR-ART and SVRT, and one new baseline result for ART (see Figures 1-3 in the attached PDF). These results include the implementation of a new baseline model ‘slot-corelnet’, which combines our pre-trained slot attention module with the corelnet [1] architecture. **We found that OCRA performed better than the baseline models in most settings, with the best average performance across all tasks, and especially strong performance in the most extreme generalization regimes.** Please note that we were not able to evaluate the ‘Attention-ResNet’ baseline on other tasks besides SVRT, as the SVRT result was reproduced from an earlier paper [2], for which no source code was provided. - Lack of source code in our original submission: **We have now provided the AC with an anonymized link to the source code for our model. We will make this code publicly available upon acceptance of the paper. We will also make the CLEVR-ART dataset publicly available upon acceptance.** - Concern about simplicity of benchmarks: The primary focus of the benchmarks we consider is **out-of-distribution generalization**. 
In the case of ART, performance of the baselines in the most extreme generalization regime ($m=95$ in Figure 1 of the attached PDF) is far from ceiling, with most performing at or near chance, whereas OCRA achieves a score of 88%. Performance of the baselines on the CLEVR-ART task (which also involves a significant degree of OOD generalization) and SVRT same-different tasks is similarly poor whereas OCRA achieves a score of 85% on both. Thus, despite the relative visual simplicity of the tasks that we consider, they are nevertheless an extremely challenging test of generalization and abstraction for baselines, whereas OCRA performs well. - Importance of extension to real-world visual inputs: We do however agree that it is important to extend models of visual reasoning so that they are capable of processing real-world visual inputs, and indeed that goal is precisely what motivated the present work. Toward that end, we focused in this work on extending abstract visual reasoning methods to deal with multi-object visual inputs. This was also the motivation for developing the CLEVR-ART dataset, which combines OOD generalization and photorealistically rendered three-dimensional objects. In future work, we agree that it would be extremely valuable to take this even further, by evaluating models on OOD generalization in the context of real-world images (e.g. on the Bongard-HOI benchmark [3], see next comment for a discussion of the prospects for doing this). - Lack of detail on differences between current benchmarks and real-world vision: The primary challenge that we envision in extending this work to more complex settings is the difficulty involved in extracting object-centric representations from real-world images. This is a challenging problem because real-world images contain less consistent segmentation cues than in synthetic tasks like CLEVR. 
However, there has recently been significant progress in this direction [4], and it should be possible to combine these advances with our proposed model, which is an important goal for future work. Another potential challenge is that real-world scenes often involve a large number of objects, which would yield an exponentially larger number of pairwise relations in our model. We believe that this could be addressed by replacing the transformer component with methods that are better designed for higher-dimensional inputs, e.g. [5]. Thus, although there are certainly challenges to be addressed, we do not envision any fundamental obstacles to further scaling of the proposed approach. We have added these clarifications and additional discussion points to the revised manuscript. [1] Kerg et al. (2022). On neural architecture inductive biases for relational tasks. [2] Vaishnav & Serre (2023). GAMR: A guided attention model for (visual) reasoning. [3] Jiang et al. (2022). Bongard-HOI: Benchmarking few-shot visual reasoning for human-object interactions. [4] Seitzer et al. (2022). Bridging the gap to real-world object-centric learning. [5] Jaegle et al. (2021). Perceiver: General perception with iterative attention. Pdf: /pdf/360f5fdf61885151282ce7355497f27695d133a8.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Exploring Loss Functions for Time-based Training Strategy in Spiking Neural Networks
Accept (spotlight)
Summary: This paper focuses on the loss functions for time-based training schemes of SNNs, which propagate gradients only when the neurons fire a spike. The authors propose to map rate-based loss functions to time-based ones and explain why they also work. Besides, the authors propose the enhanced counting loss to replace the commonly used mean square counting loss. The experimental results show that the proposed method achieves SOTA performance among time-based methods. Strengths: 1. The training algorithm studied in this paper is relatively novel and fits the event-driven feature of SNN. It also provides a novel aspect to link temporal coding and rate coding in SNNs. 2. This paper provides a solid analysis of applying rate-coded losses in time-based training schemes. The proposed enhanced counting loss reasonably stabilizes the whole training process and thus improves performance. 3. This paper is relatively well-organized and well-written. Weaknesses: The training algorithm studied in this paper cannot achieve comparable performance compared with BPTT-based methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Have the authors tried loss functions other than counting-based losses? For example, loss functions defined on spike timings as in [1] or time-to-first-spike based loss function as in [2]? 2. How is the simulation performed? Is it simulated in continuous time or discrete time steps? If it is simulated discretely, what time steps do you use in your experiment? [1]Bohte, S. M., Kok, J. N., & La Poutre, H. (2002). Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48(1-4), 17-37. [2]Mostafa, H. (2017). Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on neural networks and learning systems, 29(7), 3227-3235. Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There’s no potential negative societal impact that should be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our research direction and the innovative approach we have employed. We are truly grateful for your recognition of the solid analysis, well-structured organization, and clear exposition in our work. In response to your valuable feedback, we are fully committed to addressing any concerns you have raised and providing comprehensive responses to the questions, as detailed in the following sections. > The training algorithm studied in this paper cannot achieve comparable performance compared with BPTT-based methods. We appreciate your attention to this matter. Our proposed method differs from BPTT-based approaches in two significant aspects. Firstly, we calculate gradients with respect to spike timings, whereas BPTT-based methods compute gradients with respect to the 'spike scale' (as depicted in Figure 1(b)(d)). Secondly, our method operates in an event-driven manner, wherein information propagates solely through spikes during both forward and backward propagation. In contrast, BPTT-based techniques propagate gradient information even in the absence of emitted spikes, as exemplified by the surrogate gradient's approximation of $\frac{\partial s}{\partial u}$ regardless of spike occurrence. The event-driven property of our method poses a challenge during training compared to BPTT-based approaches due to the sparse gradient propagation path. Additionally, as our learning scheme is relatively novel when compared with BPTT-based methods, its performance and convergence speed have not yet surpassed those of the latter. However, the event-driven property empowers our learning scheme with enhanced biological plausibility and the potential for more efficient optimization when executed on neuromorphic hardware. > Have the authors tried loss functions other than counting-based losses? For example, loss functions defined on spike timings as in [1] or time-to-first-spike based loss function as in [2]? 
We have tried other loss functions, including these two. The corresponding results are listed in Section 5.1 in the supplementary material. We have conducted the experiments on the Fashion-MNIST dataset, and the results are listed in Table 3 in the appendix. From the results, cross-entropy losses (whether with respect to firing rate, such as the temporal efficient training loss [3], or firing time, such as the time-to-first-spike loss (CE)) cannot achieve good results on this dataset. This is partly due to the abnormal ratio of positive gradients (Figure 1(d) and Figure 2(d) in the appendix); this ratio easily becomes too large. The spike train (kernel) and spike train (timing) losses behave normally on the Fashion-MNIST dataset. However, when extending to larger datasets like CIFAR-10, the spike train (timing) loss cannot achieve a good result (90.09\% compared with 93.54\% for the enhanced counting loss). Therefore, among the losses we have tried, the enhanced counting loss is currently the best for the time-based SNN training scheme. [1]Bohte, S. M., Kok, J. N., & La Poutre, H. (2002). Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48(1-4), 17-37. [2]Mostafa, H. (2017). Supervised learning based on temporal coding in spiking neural networks. IEEE Transactions on neural networks and learning systems, 29(7), 3227-3235. [3]Deng, S., Li, Y., Zhang, S., & Gu, S. (2021, October). Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. In International Conference on Learning Representations. > How is the simulation performed? Is it simulated in continuous time or discrete time steps? If it is simulated discretely, what time steps do you use in your experiment? We would like to emphasize that our training approach does not require splitting the entire process into discrete time steps. 
As shown in Figure 1, we only need to record information at spike times (such as the precise timing and slope of the membrane potential) to train our network, and there is no inherent need to use clock-driven methods. However, to better integrate with current deep learning frameworks, we convert the training process from continuous time to discrete time steps in our simulation. For MNIST and Fashion-MNIST, we set the number of time steps to 5, for CIFAR10 we set it to 12, for CIFAR100 we set it to 16, and for N-MNIST we set it to 30 (these parameters are consistent with [4]). For further information about hyper-parameters, please refer to Table S1 in the appendix. [4] Zhu, Y., Yu, Z., Fang, W., Xie, X., Huang, T., & Masquelier, T. (2022). Training Spiking Neural Networks with Event-driven Backpropagation. In 36th Conference on Neural Information Processing Systems (NeurIPS 2022). --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: As my concerns are well-addressed. I would like to raise my score to 7.
Summary: This paper explores the loss function for SNNs and proves that rate-based loss functions can be also used in the time-based training scheme. Besides, the authors propose a new loss function called enhanced counting loss, which improves the network performance (compared with previously-used mean square counting loss) by providing adequate positive overall gradients for time-based training schemes in SNN training. In addition, they propose a new normalization approach, which helps the training process by tuning the threshold instead of standardizing the weights. In summary, this paper brings some interesting ideas to the community of SNNs. Strengths: 1. This paper focuses on time-based learning of SNNs, which is a very important direction of SNNs as it can better utilize the temporal information of SNNs. There is little work at present. The authors propose some interesting ideas (loss function, normalization approach), which is very impressive. 2. The proposed framework can explain why the derivative of integration of spikes with respect to the firing time is set to -1 in many previous studies. 3. The paper is well-written and includes rigorous proofs. 4. The proposed approach outperforms previous time-based training methods. Weaknesses: I think the description from Eqs.(1)-(5) is hard for readers unfamiliar with time-based learning to understand. They need to check the reference [58]. I suggest adding the detailed derivation in Appendix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Despite better utilizing the temporal information of SNNs, what are the advantages of time-based learning methods compared to activation-based approaches? Could the authors give some comments? 2. I would like to see some comments on the time-based learning of SNNs on neuromorphic chips. Is it more energy efficient? 3. The loss function (13) seems not well-defined when no spikes are emitted (the value in the ln() function is 0). 
I would like to see some explanations to cover this corner case. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for providing such an encouraging and positive response. We sincerely appreciate your recognition of the significance of our work, its alignment with an important direction, its well-written nature, and the inclusion of rigorous proofs. In light of your feedback, we are fully dedicated to addressing any concerns you have raised and offering comprehensive responses to the questions, as delineated in the subsequent sections. > I think the description from Eqs.(1)-(5) is hard for readers unfamiliar with time-based learning to understand. They need to check the reference [58]. I suggest adding the detailed derivation in Appendix. Thank you for your valuable suggestion. We will add the corresponding derivation as well as an algorithm description to our paper (please refer to the response to Reviewer LeWP for the algorithm description). > Despite better utilizing the temporal information of SNNs, what are the advantages of time-based learning methods compared to activation-based approaches? Could the authors give some comments? The key point is that our approach is event-driven. One can compare Figure 1(a)(b) and (c)(d) to understand the difference between the time-based training scheme and the activation-based training scheme: In Figure 1(a)(b), the forward and backward propagation of activation-based training schemes cannot be simulated in continuous time, since the surrogate gradient in the backward stage is also propagated through neurons that have not fired. In Figure 1(c)(d), the simulation of time-based training schemes can operate in continuous time, since in both the forward and backward stages, information propagation between neurons is conveyed by spikes. This event-driven property guarantees the possibility of network simulation in continuous time, has the potential to save energy on neuromorphic chips, and provides more biological plausibility. 
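As a concrete illustration of the event-driven property discussed above, here is a toy sketch (with deliberately simplified neuron dynamics of our own, not the exact model of the paper) of a forward pass that updates state only at spike times, with no fixed time grid:

```python
import math

def event_driven_forward(input_events, weights, tau=2.0, theta=1.0):
    """Process a time-sorted list of (time, input_index) spike events.

    The membrane potential decays exponentially between events and jumps
    by the synaptic weight at each input spike; computation happens only
    at spike times (event-driven), never on a fixed time grid.
    """
    u, t_last, output_spikes = 0.0, 0.0, []
    for t, i in input_events:
        u *= math.exp(-(t - t_last) / tau)  # analytic leak since last event
        u += weights[i]                     # instantaneous synaptic jump
        t_last = t
        if u >= theta:                      # threshold crossing -> spike
            output_spikes.append(t)
            u = 0.0                         # reset after firing
    return output_spikes

# Two strong input spikes in quick succession drive one output spike.
events = [(0.1, 0), (0.2, 1), (5.0, 0)]
print(event_driven_forward(events, weights=[0.6, 0.6]))  # → [0.2]
```

Between events nothing is computed, which is what makes simulation in continuous time possible.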
> I would like to see some comments on the time-based learning of SNNs on neuromorphic chips. Is it more energy efficient? Our learning scheme has the potential to exhibit enhanced energy efficiency when executed on neuromorphic hardware. To highlight the hardware efficiency advantage, we conduct a comparison of the number of operations between our method and Backpropagation Through Time (BPTT) algorithms when dealing with sparse spikes. Although we analyze a single fully-connected layer for simplicity, the analysis extends similarly to other types of layers and the entire network. During training, BPTT algorithms necessitate unfolding through the time axis, as depicted in Figure 1(a)(b). Consequently, this leads to a corresponding number of operations of at least $O(TC_{in}C_{out})$, where $T$ denotes the total time steps, and $C_{in}$ and $C_{out}$ represent the number of input and output neurons, respectively. In contrast, temporal learning algorithms, such as ours, only need to address scenarios where a specific neuron fires a spike and records pertinent information. During the forward stage, a spike fired by an input neuron influences the state of both itself and all output neurons, culminating in a total of $O(C_{out})$ operations. During the backward stage, a spike discharged by an output neuron requires propagating gradient information to all intervening spikes between this spike and the last spike fired by the same neuron (thus, all input spikes are processed once during this stage). Therefore, by denoting the average firing rate of input and output neurons as $\alpha$ and $\beta$, the number of operations for this layer is determined as $O(T(\alpha C_{in}C_{out}+\beta C_{in}))$. In cases of sparse spikes, event-based learning algorithms exhibit an advantage, as $\alpha+\beta \ll 1$ under such circumstances. > The loss function (13) seems not well-defined when no spikes are emitted (the value in the ln() function is 0). 
I would like to see some explanations to cover this corner case. Thank you for pointing it out. The integral $\int_0^T s_i(t)\, dt$ must be an integer since it represents the number of spikes fired by neuron $i$ during the simulation time ($0$ to $T$), so the loss only needs to be well-defined at integer points. As a result, we can let $f_i(x) = x - \text{target}_i \ln(x)$ when $x>0$ and $f_i(x) = 0$ when $x=0$ to cover this corner case. Then the loss function is $$L( \boldsymbol{s}, \boldsymbol{target} ) = 2\lambda \sum_{i=1}^C f_i \left( \int_0^T s_i(t)\, dt \right).$$ --- Rebuttal Comment 1.1: Comment: I have read the point-to-point responses to my questions, which address my concerns.
Summary: In this work, the authors propose a framework that applies the rate-coding-based loss functions to time-based training. They show that the proposed method outperforms existing time-based ones. Strengths: 1. The proposed method suggests a way to combine different approaches to train spiking neural networks. 2. It provides detailed derivation and a clear explanation of the pipeline. Weaknesses: 1. Overall, the performance of time-based models suffers a deficit compared to other SNN methods. What kind of evaluable benefit does this kind of training bring to the current paper? 2. What's the difference between encoding the error time w.r.t to the membrane potential and the spike time? 3. It would be better to have an Algorithm description in the main context. 4. It lacks a clear demonstration of what kind of new information/feature is captured so that the overall performance is improved. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please check the weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our sincere gratitude for providing constructive and insightful feedback such as the addition of the algorithm description. It is truly gratifying to know that you value the meticulous derivation and lucid explanation of our pipeline. In light of your input, we are fully dedicated to addressing your concerns as outlined in the forthcoming sections. > Overall, the performance of time-based models suffers a deficit compared to other SNN methods. What kind of evaluable benefit does this kind of training bring to the current paper? The primary objective of our research is to develop a fully biologically plausible learning scheme akin to STDP. These approaches possess key properties, including being event-driven, online, and local. While more biologically plausible schemes may incur performance trade-offs, they hold significant value in explaining learning rules observed in biological brains. Our work focuses on event-driven property. One can gain insight into the distinction between time-based and activation-based training schemes by comparing Figure 1(a)(b) and (c)(d): In Figure 1(a)(b), the forward and backward propagation of activation-based training schemes cannot be simulated in continuous time, as the surrogate gradient in the backward stage is also propagated through neurons that have not fired. Meanwhile, in Figure 1(c)(d), the simulation of time-based training schemes can indeed operate in continuous time, as both the forward and backward stages entail information propagation between neurons conveyed by spikes. Apart from conferring more biological plausibility, this event-driven property ensures the feasibility of network simulation in continuous time and holds the potential to conserve energy when executed on neuromorphic chips. > What's the difference between encoding the error time w.r.t to the membrane potential and the spike time? 
The membrane potential and the spike time do not conflict with each other, since they pertain to distinct stages in SNN time-based training. You can refer to Figure 1: Suppose there are $N$ layers in total, then in time-based training, the gradient propagation path is loss -> (spike timing in layer $N$ -> membrane potential in layer $N$) -> (spike timing in layer $N-1$ -> membrane potential in layer $N-1$) -> ... -> (spike timing in layer $1$ -> membrane potential in layer $1$), and there is an additional path from the membrane potential in layer $n$ to the weight between layers $n$ and $n-1$. > It would be better to have an Algorithm description in the main context. Although there exists a continuous version of our algorithm, we implement the discrete version since it aligns more seamlessly with existing deep learning frameworks, rendering it easier to implement and enabling better utilization of CUDA acceleration. The following is the discrete version of our algorithm and will be added to our paper: $$\begin{array}{l} \text{Input: Input spike train $\boldsymbol s^{(0)}$, weight for all layers $\boldsymbol W^{(n)}(n = 1, ..., N)$, total simulation time }T\\\\ \text{Output: }\nabla\boldsymbol W^{(n)}(n=1,...,N)\\\\ \text{//Forward}\\\\ \text{Initialize $\boldsymbol{Neuron}^{(n)}(n=1,...,N)$: neurons with states for all layers}\\\\ \text{for $n=0$ to $N-1$ do}\\\\ \qquad\text{for $t=0$ to $T-1$ do}\\\\ \qquad\qquad\text{Update state of $\boldsymbol{Neuron}^{(n+1)}$ by Eq.(1)}\\\\ \qquad\qquad\text{if Membrane potential $\boldsymbol u^{(n+1)}\geq\theta$ then}\\\\ \qquad\qquad\qquad\boldsymbol s^{(n+1)}=1\\\\ \qquad\qquad\qquad\text{Reset state of }\boldsymbol{Neuron}^{(n+1)}\\\\ \qquad\qquad\text{end if}\\\\ \qquad\text{end for}\\\\ \text{end for}\\\\ \text{//Backward}\\\\ \text{Initialize $\nabla\boldsymbol W^{(n)},\frac{\partial L}{\partial\boldsymbol t(s^{(n)})}\gets0$ for }n=1,2,...,N\\\\ \text{Calculate $\frac{\partial L}{\partial\boldsymbol t(s^{(N)})}$ by Eq.(13)}\\\\ \text{for $n=N$ to 1 
do}\\\\ \qquad\text{for $t=T-1$ to 0 do}\\\\ \qquad\qquad\text{if $\boldsymbol s^{(n)}(t)=1$ then}\\\\ \qquad\qquad\qquad\text{Change $\frac{\partial L}{\partial\boldsymbol u^{(n)}}$ to $\frac{\partial L}{\partial \boldsymbol t(s^{(n)})}\frac{\partial\boldsymbol t(s^{(n)})}{\partial\boldsymbol u^{(n)}(t)}$ by part of Eq.(3)(4)}\\\\ \qquad\qquad\text{end if}\\\\ \qquad\qquad\text{Add $\frac{\partial L}{\partial\boldsymbol u^{(n)}}\frac{\partial\boldsymbol u^{(n)}}{\partial\boldsymbol W^{(n)}}$ to $\nabla\boldsymbol W^{(n)}$ by Eq.(3)}\\\\ \qquad\qquad\text{Add $\frac{\partial L}{\partial\boldsymbol u^{(n)}}\frac{\partial\boldsymbol u^{(n)}}{\partial\boldsymbol t(s^{(n-1)})}$ to $\frac{\partial L}{\partial \boldsymbol t(s^{(n-1)})}$ by Eq.(4)}\\\\ \qquad\text{end for}\\\\ \text{end for}\\\\ \end{array}$$ > It lacks a clear demonstration of what kind of new information/feature is captured so that the overall performance is improved. In Section 4.3, we have analyzed the deficiency of the counting loss: suppose an output neuron corresponding to the label emits $m$ spikes; then the total gradient on these spikes is $m (\text{target}\_{label} - m)$ (Eq.(12)). This is a concave quadratic function, which means that firing one spike receives the same total gradient as firing $\text{target}\_{label} - 1$ spikes, and less than firing $2$ to $\text{target}\_{label} - 2$ spikes. Recall that the total time-based gradient, i.e., the cumulative amount by which all spikes are pushed earlier, remains constant across layers (Eq.(6)). A concave quadratic function is not reasonable from this perspective. Our proposed enhanced counting loss converts this function $m (\text{target}\_{label} - m)$ into a decreasing linear function $\text{target}\_{label} - m$, which is more reasonable regarding the total gradient. Therefore, in the experiments, we show the total gradient scale (both raw and in absolute value) and the ratio of positive gradients w.r.t. the training accuracy.
Results show that our proposed method decreases the total gradient scale and makes the ratio of positive gradients more balanced. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks for the rebuttal. It addresses my questions. I think my current score is suitable.
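As a minimal numeric sketch of the loss comparison discussed in this rebuttal (not part of the original response; the `target` value and spike counts below are hypothetical):

```python
# Sketch of the total gradient on the labeled output neuron firing m spikes,
# under the original counting loss vs. the enhanced counting loss described above.
# `target` is the desired spike count for the label neuron (hypothetical value).

def counting_loss_total_grad(m: int, target: int) -> int:
    # Concave quadratic: same total gradient at m = 1 and m = target - 1.
    return m * (target - m)

def enhanced_counting_loss_total_grad(m: int, target: int) -> int:
    # Decreasing linear function of the current spike count m.
    return target - m

target = 10
quad = [counting_loss_total_grad(m, target) for m in range(target + 1)]
lin = [enhanced_counting_loss_total_grad(m, target) for m in range(target + 1)]

# The quadratic assigns the same total gradient to m = 1 and m = target - 1 ...
assert quad[1] == quad[target - 1]
# ... while the enhanced loss decreases monotonically in m.
assert all(lin[m] > lin[m + 1] for m in range(target))
```

This only illustrates the shapes of the two total-gradient functions; the actual per-spike gradients follow Eq.(12) and Eq.(6) in the paper.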
Summary: This paper focuses on the time-based training approach for spiking neural networks (SNNs). It first explains why rate-based loss functions can be used in time-based training for SNNs by establishing a link between rate-based losses and time-based ones. Then it analyzes the overall gradient provided by the loss function and proposes the enhanced counting loss based on this analysis. Finally, it proposes a new normalization method that changes the threshold instead of the weight scale. Experimental results on classification tasks show improved performance. Strengths: 1. Exploring event-driven learning for SNNs is important. 2. This paper is technically solid in analyzing various aspects of rate-based loss functions. 3. The proposed enhanced counting loss and threshold training are shown to be effective by the experiments. Weaknesses: This paper has a relatively narrow scope, focusing solely on improving counting-based loss functions for time-based training schemas. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Are Sections 4.1 and 4.2 related to the proposed methods in this paper? I am somewhat confused about the role of this part in this paper, and find it hard to follow its conclusion. 2. Authors do not list BPTT-based methods in experiments. They say BPTT-based methods incorporate information when neurons do not fire in training, which improves the final performance. Does it mean BPTT-based methods have a better performance than time-based methods? 3. Is time-based learning of SNN more suitable for neuromorphic chips than activation-based learning? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and insightful feedback. It is truly heartening to learn that you acknowledge the sound analysis presented in our paper and find merit in our newly proposed loss function, which contributes to the stabilization of the entire training process. We are committed to addressing your concerns and providing comprehensive responses to the questions you have raised, as outlined in the following sections. > This paper has a relatively narrow scope, focusing solely on improving counting-based loss functions for time-based training schemas. Thank you for your valuable feedback. We wish to underscore that, alongside the counting-based loss function tailored for the time-based training schema, we have also introduced a mechanism to incorporate rate-based loss functions into the temporal training approach. Moreover, the potential to apply time-based loss functions to rate-based training schemes in a reverse manner offers a compelling avenue for future exploration. Furthermore, our proposed method of transferring the scale factors used in weight standardization into thresholds is not restricted to specific SNN training algorithms. We believe that our work provides a foundation for future research in this area, and we genuinely appreciate your keen interest in our work. > Are Sections 4.1 and 4.2 related to the proposed methods in this paper? I am somewhat confused about the role of this part in this paper, and find it hard to follow its conclusion. Thank you for raising this point. Sections 4.1 and 4.2 of our paper furnish a theoretical basis for applying rate-based losses in temporal SNN training. The analysis presented in these sections elucidates the underlying principles behind both the counting loss and our proposed enhanced counting loss, and explains why they are effective in our approach. We will clarify it in the revised version. 
> Is time-based learning of SNN more suitable for neuromorphic chips than activation-based learning? Yes. The event-driven nature of our learning scheme contributes to its increased biological plausibility and suitability for continuous-time learning while also exhibiting enhanced efficiency when executed on neuromorphic hardware. To highlight the hardware efficiency advantage, we conduct a comparison of the number of operations between our method and Backpropagation Through Time (BPTT) algorithms when dealing with sparse spikes. Although we analyze a single fully-connected layer for simplicity, the analysis extends similarly to other types of layers and the entire network. During training, BPTT algorithms necessitate unfolding through the time axis, as depicted in Figure 1(a)(b). Consequently, this leads to a corresponding number of operations of at least $O(TC_{in}C_{out})$, where $T$ denotes the total time steps, and $C_{in}$ and $C_{out}$ represent the number of input and output neurons, respectively. In contrast, temporal learning algorithms, such as ours, only need to address scenarios where a specific neuron fires a spike and records pertinent information. During the forward stage, a spike fired by an input neuron influences the state of both itself and all output neurons, culminating in a total of $O(C_{out})$ operations. During the backward stage, a spike discharged by an output neuron requires propagating gradient information to all intervening spikes between this spike and the last spike fired by the same neuron (thus, all input spikes are processed once during this stage). Therefore, by denoting the average firing rate of input and output neurons as $\alpha$ and $\beta$, the number of operations for this layer is determined as $O(T(\alpha C_{in}C_{out}+\beta C_{in}))$. In the case of sparse spikes, event-based learning algorithms exhibit an advantage, as $\alpha+\beta \ll 1$ under such circumstances. 
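A small back-of-the-envelope sketch of the operation-count comparison above (the layer sizes and firing rates are hypothetical; this only illustrates the stated asymptotics, not the authors' implementation):

```python
# Rough operation counts for one fully-connected layer, following the
# comparison above. T: time steps; c_in, c_out: input/output neuron counts;
# alpha, beta: average firing rates of input/output neurons (assumed values).

def bptt_ops(T: int, c_in: int, c_out: int) -> float:
    # BPTT unfolds every time step regardless of spiking activity: O(T*C_in*C_out).
    return T * c_in * c_out

def event_driven_ops(T: int, c_in: int, c_out: int,
                     alpha: float, beta: float) -> float:
    # Forward: each input spike touches all c_out outputs;
    # backward: each output spike propagates gradients to the intervening
    # input spikes. Total: O(T*(alpha*C_in*C_out + beta*C_in)).
    return T * (alpha * c_in * c_out + beta * c_in)

T, c_in, c_out = 100, 512, 512
alpha = beta = 0.05  # sparse spiking: alpha + beta << 1

# Under sparse spikes, the event-driven count is far below the BPTT count.
assert event_driven_ops(T, c_in, c_out, alpha, beta) < bptt_ops(T, c_in, c_out)
```

With these hypothetical numbers the event-driven count is roughly 5% of the BPTT count, matching the claimed advantage when spikes are sparse.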
> Authors do not list BPTT-based methods in experiments. They say BPTT-based methods incorporate information when neurons do not fire in training, which improves the final performance. Does it mean BPTT-based methods have a better performance than time-based methods? We would like to point out that the performance of time-based training methods has yet to match that of surrogate gradient methods, attributed to the following reasons: Firstly, our event-driven approach restricts information propagation solely through spikes during both forward and backward propagation, as depicted in Figure 1d. In contrast, surrogate gradient techniques propagate gradient information even when no spikes are emitted, as exemplified in Figure 1b, where the surrogate gradient approximates $\frac{\partial s}{\partial u}$ regardless of spike occurrence. This event-driven property renders our method more challenging to train in comparison to surrogate gradient techniques, primarily due to the sparse gradient propagation path. Furthermore, our learning scheme is relatively novel in contrast to surrogate gradient methods. Conversely, the time-based training method boasts certain advantages over surrogate gradient methods. Specifically, our event-driven learning scheme exhibits enhanced biological plausibility, proving more amenable to continuous-time learning, and significantly more efficient when executed on neuromorphic hardware. The in-depth rationale is elucidated in the preceding question.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Efficient Subgame Refinement for Extensive-form Games
Accept (poster)
Summary: The paper proposes GS2, a method to overcome the problem of large information states in subgame solving in imperfect-information games. Theoretical results and experimental evaluation on GuanDan are presented, with impressive results. Strengths: The main idea of the paper, to use a diversity function to filter the set of states, is nice. The practical results are particularly impressive: experiments on medium-sized games seem to clearly indicate that the diversity function has a positive impact, and the experiments on GuanDan seem to suggest that the bot is state-of-the-art. Weaknesses: I was a reviewer on an earlier version of this paper, during which time I had a robust discussion with the paper authors. The present submission is much improved and has addressed most of the concerns I had with the previous version. I have only a few lingering questions and minor concerns from that discussion, mostly concerning the experiments; see below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. (L522) "Although standard GS2 need to generate the full second-order knowledge infosets, we only consider the nodes in the first-order knowledge infoset" Does this imply that, in the subgame, the opponent essentially has complete information about the history at the root of the subgame? * If so, this seems like a fairly strong simplification, and I am fairly surprised at the strong practical performance. Why do you believe that GuanDan has such a structure that this simplification does not completely destroy performance? * If not, I assume you will have to incorporate second-order nodes *somewhere*. How, precisely, is that done? 2. (Sec C.3) What depth and branching factor limits are used in the GuanDan experiments? 3. In the tabular experiments, it appears that the performances of GS2-R and GS2-D get closer to each other as $k$ increases. 
I would be interested to see experiments with slightly larger $k$, perhaps $k=5$ or $k=10$, to see if this effect continues. Along these lines, how computationally expensive is it in GuanDan to use a larger $k$? That is, if you were to set $k=10$ (say) in GuanDan, would the time taken in subgame solving be then too long to use in realtime? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your appreciation and valuable comments/suggestions for our work. We hope these responses address the lingering concerns. * **Regarding Question 1**. Thank you for your insights. In GuanDan, we indeed operate under the assumption that opponents possess complete information. This can be viewed as sampling across the opponent's infoset, functioning as a specialized sampling approach. The rationale is based on the observation that, as GuanDan progresses, actions reduce game uncertainty. Hence, adept players often anticipate other players' cards -- a crucial skill for proficient players, especially as the endgame approaches. It's worth noting that while this simplification might yield a more conservative strategy at the beginning, the subgame is solved in a depth-limited manner. Consequently, leaf-node values are derived from a blueprint devoid of complete information. This ensures that opponents cannot excessively capitalize on having complete information, thus mitigating the potential negative impact of our simplification. * **Regarding Question 2**. Thank you for the question. \- Branch Factor: We set the branch factor to 2 for each action type. Actions in GuanDan are divided into several types, as detailed in the "Card Type" section of Appendix B. For each infoset, the player is allowed to choose from all types in its hand if no one plays any cards after the player's last action. Otherwise, only the same card types, "Pass (not playing any cards)" or "Bomb (a special card type)" can be played. Thus the numbers of actions after pruning may not be the same for each infoset. \- Depth Limit: The default depth limit is set at 10, inclusive of the teammate's node which possesses only a single legal action. However, if the combined card count for all opponents' hands exceeds 27, the depth limit is reduced to 8. * **Regarding Question 3**. Thank you for your insightful question. 
As $k$ grows larger, the sets generated by GS2-R are more likely to be diverse in small infosets, thus yielding performance similar to that of the sets generated by GS2-D. As you suggested, we conduct experiments with $k=8$ and $k=10$ in Goofspiel. The results for these additional experiments can be found in Section 2 of the global response PDF. The results show that the performance of GS2-D and GS2-R gets closer as $k$ increases. They also show that the exploitability of the resulting strategy increases due to the time limit, showcasing the necessity of reducing the subgame size. Regarding computational complexity in GuanDan: it is indeed a primary concern. In our experiments, we employed two Intel(R) Xeon(R) Gold 6242R CPUs @ 3.10GHz, using the multi-process version of MCCFR. The 30s time limit for GuanDan is greater than that of Texas Hold'em, allowing for additional iterations. With these specifications, GS2 is efficient enough for k=10. However, for substantially larger $k$ values, like 40 or 50, convergence becomes problematic in scenarios with expansive branching. Specifically, we noticed minimal improvement against the two baselines when $k=40$ or $50$, compared to the performance when $k=30$. We attribute this to the linear growth in the number of opponent infosets as $k$ increases, causing the strategy to stall before surpassing the performance of the smaller $k$. --- Rebuttal Comment 1.1: Comment: Q1: I think this should be explicitly stated, even perhaps in the body. It greatly changes the interpretation of what the algorithm is doing---assuming the opponent knows your private cards, even temporarily, can greatly change how the subgame solver behaves. Q2: Thank you. I suggest including these in the paper itself. Q3: Thank you. One more follow-up here: your Goofspiel experiments suggest that, at least in the game on the left, GS2-R and GS2-D have about the same performance when k=10. 
You also say that k=10 is computationally feasible in GuanDan. I'd like to see an ablation running GS2-R in GuanDan, with various k-values, especially larger ones---that is, some plot like your global response plots within GuanDan. My purpose here is to understand what exactly in practice is the effect of the diversity function. I understand, however, if this is not possible with the time constraint of the discussion phase. In any case, my opinion of the paper has not significantly changed, and I keep my score. --- Reply to Comment 1.1.1: Title: Thanks for your suggestions Comment: Thank you for your advice. We will make sure to incorporate the details in the paper. Regarding GS2-D in GuanDan: we are grateful for your understanding regarding the time constraints that prevent us from completing the new additional experiments. We will manage to present some preliminary experimental results before the discussion deadline.
Summary: The paper proposes a novel subgame resolving algorithm in Extensive-Form games called GS2, which will only sample a portion of subgames and dramatically reduce computation complexity. Strengths: - The new subgame resolving algorithm GS2 has a theoretical guarantee, as all previous work did. - The paper has sufficient experiments and also gets great performance in the Guandan game. Weaknesses: No code was released. Since subgame refinement is really an engineering topic, if possible, I think releasing the code will greatly benefit the whole community to do improvements in the future. Otherwise it will be really hard to reproduce the work since there are too many details in implementing algorithms. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Listed in weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes the authors have addressed the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your recognition of the merits of our work. In response to your concern, we are in the process of refining our code to ensure its clarity and ease of use. Once finalized, we commit to making it publicly accessible. --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: Looking forward to your code!
Summary: The contributions of the paper are twofold: - On the theoretical hand, the paper introduces a bound on the total increase in exploitability when refining the strategy in a subgame that considers only some of the possible infosets of the adversary in a stochastic way. - This bound is based on considering that the exploitability of the refined strategy may increase (in the worst case) due to updating in the worst possible way the strategy for some of the opponent's infosets ($\delta$ term in the bound). This extra exploitability is then weighted by the probability of not sampling such an adversary infoset ($1-\omega$ term) and by the probability of having reached that infoset in game ($\pi$ term) - This theoretical bound is then used in the design of the *Generative Subgame Solving (GS2)* algorithm. GS2 samples some of the nodes $h$ in the current infoset of the refining player. To sample the different $h$ that will belong to the subgame that will be refined, a *diversity-based generation function* is used. This function is a heuristic that selects the histories such that the distribution of counterfactual values at the infosets of the opponent at the root of the GS2 subgame is representative of the distribution that there would have been in the 1-KLSS. This avoids refining the strategy on overly pessimistic or overly optimistic "cuts" of the original subgame, even if there is no formal proof that this reduces exploitability Strengths: - Interesting and novel technical approach, which allows to interpolate between the unsafe solving from Ganzfried and Sandholm to the safe approach of 1-KLSS - Formalism used is in line with previous works in the field - Both medium and larger scale experiments. 
This allows verifying both the correctness and the scalability of the proposed approach - clear structure of the paper and clear descriptions of the adopted solutions - an example of how the technique would be applied on a game is presented, further clarifying the concepts Weaknesses: - Some of the claims presented by the paper are misleading or poorly justified: - The bound presented in Proposition 4.1 bounds the possible increase in exploitability from the blueprint to the refined strategy as the worst-case scenario in which the refined strategy is maximally losing in case the opponent decides to switch to playing only into that infoset. - Lines 235-240 claim that GS2 is more suitable for situations in which $\delta$ is already low, differently from traditional unsafe solving techniques. This means that the blueprint is of low quality, and therefore GS2 cannot make things too much worse in terms of exploitability w.r.t. the original (already bad) blueprint. I don't get why such an argument would not apply as is to any other subgame solving technique - Line 290-291: "[GS2] also refines the blueprint by selectively focusing on the most relevant portions of the game tree". The introduced diversity generated function only considers the adversary's infosets as counterfactual values. There is no clear connection to how the sampled histories should be the **most relevant** from a strategy refinement perspective - The large scale experimental setting presented feels disconnected from the techniques presented by the paper: - As indicated in Appendix C.2: *Although standard GS2 need to generate the full second-order knowledge infosets, we only consider the nodes in the first-order knowledge infoset.* I interpreted this as the fact that the set of histories $\{h \in S_{top}: \exists I_2: h \in I_2 \land \exists h' \in I_1, I_2\}$ is not added to the constructed subgame. 
My opinion is that this approximation is really important, to the point that the resulting algorithm should be clearly distinguished from GS2 as presented in the previous parts of the paper. This is because, while GS2 is a "partial cut" version of 1-KLSS, not adding all nodes in $I_2$ makes the technique possibly much more unsafe, and more similar to a "partial cut" version of the unsafe abstraction techniques from Ganzfried and Sandholm. - GS2 is an unsafe solving technique: no cases in which such unsafeness becomes evident are presented, leaving open the question of when such a technique may fail Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Other than asking the authors to share their view/clarify the weakness I listed in the previous section, I add the following questions to clarify specific points of the paper: - KLSS's explanation (Line 189-192) is unclear. I suggest just keeping the formal definition, which is fine (and not the partially repeated phrase at line 189). Also the $k$-th order knowledge limited subgame starts from the *infoset $I$* of the current player and not from the *current history $h$* as indicated. - I'd like to ask for a confirmation: is $\pi^{\sigma'_1}_{-2}(\mathcal I_2(h)) = \pi^{\sigma_1}_{-2}(\mathcal I_2(h)) \ \forall h \in S_{top}$? Minor typos: - Line 255: is there any connection that was intended between controlling the $k$ parameter and the possible use of abstraction techniques? In my understanding, they are two techniques that can be applied independently. If this is the case, "Therfore" should be substituted with "Moreover" or a synonym thereof. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
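As a rough numeric illustration of the bound structure this review summarizes (the $\delta$ term weighted by the reach probability $\pi$ and the non-sampling probability $1-\omega$; all values below are hypothetical, and this is not a substitute for the exact statement of Proposition 4.1 in the paper):

```python
# Toy sketch of the exploitability-increase bound structure described in the
# review: for each opponent infoset at the subgame root, a worst-case extra
# exploitability delta is weighted by the probability (1 - omega) of not
# sampling that infoset and by its reach probability pi. All values hypothetical.

def exploitability_increase_bound(infosets):
    # infosets: list of (pi, omega, delta) triples.
    return sum(pi * (1.0 - omega) * delta for pi, omega, delta in infosets)

# Three hypothetical opponent infosets: (reach prob, sampling prob, worst-case delta).
infosets = [(0.5, 0.9, 2.0), (0.3, 0.5, 2.0), (0.2, 0.1, 2.0)]
bound = exploitability_increase_bound(infosets)

# Sampling every infoset (omega = 1) drives the extra term to zero,
# recovering the safe 1-KLSS case.
assert exploitability_increase_bound([(p, 1.0, d) for p, _, d in infosets]) == 0.0
```

The sketch only shows how the bound shrinks as the sampling probabilities $\omega$ approach 1; the paper's proposition gives the precise coefficients.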
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We sincerely appreciate your valuable feedback. Here are our responses. - **Regarding the exploitability bound.** Thank you for pointing out this aspect. Indeed, the bound presented is specific to the infoset. The reason for this is rooted in the fact that GS2 is constructed upon the KLSS framework. Consequently, obtaining an exploitability bound is not as straightforward as with conventional subgame solving techniques. We recognize that the proposition can be extended to strategies where GS2 is applied to disjoint subgames; in such cases, the additional term would vary based on the maximum across these subgames. We will ensure that this clarification is included in the revision of the paper. - **Regarding L235-240.** Thank you for raising this concern. In many unsafe subgame solving techniques, like endgame solving, the assumption is that the opponent blueprint is close to equilibrium; deviations from this can hinder strategy refinement. Thus, the coefficient $1-\omega$ in the exploitability bound will increase up to 1. That's why we say GS2 is more suitable than unsafe solving. - **Regarding Line 290-291** Thanks for the great point. Our use of the term "relevant" was imprecise. What we intended to convey is that the GS2 framework focuses on portions of the game tree that are "typical" or representative based on the adversary's infosets as counterfactual values. We appreciate the clarification and will correct this in the revision to more accurately reflect our approach. - **Regarding the implementation in Appendix C.** We appreciate the observation. The approximation is implemented based on the concept of not just sampling from opponent infosets but also within the infoset itself to manage complexity. This approach is like applying a specific generation function to the opponent's infoset. 
It leads to a conservative strategy refinement by assuming opponents are aware of the player's exact private information. Such an assumption prevents a drastic rise in exploitability at the present infoset. This method is influenced by patterns observed in the game of GuanDan. In this game, as actions unfold, uncertainty diminishes; thus human players are expected to guess the other players' cards, especially near the end of the game, which is crucial for expert players. Therefore, considering the worst case in which the opponents know the player's cards is practical. Even though this idea seems somewhat domain-specific, we believe it is also applicable to similar games. - **Regarding the unsafeness.** GS2's safety is contingent upon the quality of the generation function used. Specifically, if the generation function were to primarily target situations where opponents face challenges, the refinement could become overly optimistic, making it susceptible to exploitation. - **Regarding Question 1**. Thanks for your suggestions. We will correct it in the later version. - **Regarding Question 2**. If $\sigma_1$ and $\sigma_1'$ are the same outside the subgame $S$, the answer is yes. - **Regarding the typos.** Thanks for the advice. We will make the corrections. --- Rebuttal Comment 1.1: Comment: Thanks for your answer, things are clearer now. On top of the minor revisions you already included, I strongly suggest to include the discussion on appendix C raised from me and reviewer GWJJ in the main body, as this is crucial to get the whole picture. If I get confirmation of this, I'll raise my score to a 6 --- Reply to Comment 1.1.1: Comment: Thank you for your insightful suggestions. We hereby make our commitment to incorporating the comprehensive discussions concerning the implementations in appendix C and other minor revisions discussed before within the main body of the revised paper.
Summary: The paper presents a generative subgame solving framework that can scale to games with a large amount of hidden information. One of the key ideas behind the generative framework is to prioritize exploration based on diversity. The paper evaluates on small-sized tabular games and a large poker-like game called GuanDan. Strengths: Overall, I have a positive opinion of the paper, with a few reservations (see below). In terms of strengths, I appreciated experiments on a real game. The structure and organization are also appropriate, with only minor clarity issues. While the method appears to be mostly a heuristic, it seems to be working pretty well in practice, and for that reason it seems worthy of discussion at the conference. Weaknesses: I find that the paper could be improved by expanding the discussion along the following directions: - I think more should be done to ablate the choice of diversity-based generating function. The method proposed fundamentally boils down to sampling possible compatible histories in the infoset by means of a "diversity-based generation function". Many choices of prioritization could be imagined, and it would be important to have some form of data points about what tends to work and what doesn't. Also, data supporting the need of using prioritization as opposed to not prioritizing at all would be nice to have. - The discussion around KLSS as introduced by Zhang and Sandholm (L37) seems to suggest that it is a safe method, as it is said "This approach enables the safe refinement of strategies". However, KLSS is typically used in an unsafe way in practice. I also think that the paper should discuss the following relevant related literature: - Approaches used in the game of Hanabi. While Hanabi is common-interest in nature, the techniques that have been proposed for it overlap with the material of the paper. For example, the work on Learned Belief Search should be discussed. 
- Approaches used in the game of Bridge, such as joint policy search, also seem relevant. Other comments: - I feel like calling the games of section 5.1 "medium" is rather generous. The games only have a few hundred sequences per player and can, for example, be solved exactly using the simplex algorithm via the sequence-form. - L113, "one cannot assume rationality of the chance player". I found this obscure. In conclusion, while I find the underlying idea rather straightforward, I think the strengths outweigh the weaknesses despite the limited evaluation. However, I think the paper would substantially benefit from adding more data regarding the choice of prioritization functions, and expanding the discussion regarding related literature. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please see above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and insightful comments. - **Regarding the choice of functions.** We appreciate your insight on the importance of thoroughly ablating the choice of our diversity-based generation function. In the current study, our choice of the diversity-based generation function was driven by both theoretical considerations and empirical performance in our preliminary experiments. However, we understand the importance of showcasing a more exhaustive comparison. To address this comment, we have included a more detailed comparative analysis in the supplementary material attached to our global response. The experiment is conducted in Leduc poker. In this evaluation, we employ all feasible deterministic generations, as opposed to a probability distribution, for each individual infoset right after the cards are dealt. The data points, located at $(x, y, z)$, correspond to instances of a generated subgame where the opponent's private card set is $\{x,y,z\}$. The results show that the diversity generation function performs well at all of the player's infosets. - **Regarding the discussion of KLSS.** Thank you for the observation. To clarify, we will revise the statement to read: "This approach can enable safe strategy refinement under specific conditions of usage" in the updated version. - **Regarding the relevant related literature.** Thank you for highlighting the relevance of methods from the games of Hanabi and Bridge. In subsequent versions of our paper, we will expand upon this in the related work section: Search algorithms are also applied to obtain better joint policies within teammates in collaborative games such as Hanabi [1-2] and the bidding phase of contract Bridge [3]. For example, SPARTA [1] utilizes exact belief updates for single-agent search and retrospective belief updates for multi-agent search to handle the large belief range. Subsequently, the strategy is improved through Monte Carlo rollouts.
On the other hand, instead of maintaining an explicit belief distribution as in single-agent search, the Learned Belief Search [2] method uses a learned belief model to sample states from the belief distribution, allowing for application in games with large belief spaces. Rather than simply improving the strategy via rollouts, the Joint Policy Search [3] method first decomposes the global change of the game value into local changes at each infoset, and then iteratively improves the strategy based on this decomposition. Although these approaches primarily focus on improving strategies in collaborative settings and cannot be directly used in games with adversaries, the underlying ideas could be helpful when developing new techniques that conduct search within teammates in games like GuanDan. - **Regarding the word "medium".** Thank you for pointing out the potential mischaracterization. We used the term "medium" based on the terminology from KLSS [Zhang and Sandholm]. We will revise the term to "research games" or "simple games" in the updated version of our paper. - **Regarding the comments on L113.** Thank you for pointing out the ambiguity. In L113, when we mention "rationality of a player", we refer to the assumption made in some prior techniques (e.g., the iterative deepening method used in KLSS) that the opponent will avoid empirically suboptimal actions. This allows one to prune parts of the game tree based on this assumption. However, since a chance player operates under a fixed probability distribution, this assumption of rationality is not applicable. Thus, using the iterative deepening approach in this context would be unsound. We will elaborate on this distinction in the revised version for clarity. [1] Lerer A, Hu H, Foerster J, et al. Improving policies via search in cooperative partially observable games[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(05): 7187-7194. [2] Hu H, Lerer A, Brown N, et al.
Learned belief search: Efficiently improving policies in partially observable settings[J]. arXiv preprint arXiv:2106.09086, 2021. [3] Tian Y, Gong Q, Jiang Y. Joint policy search for multi-agent collaboration with imperfect information[J]. Advances in Neural Information Processing Systems, 2020, 33: 19931-19942. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for your response! It all sounds good and I have no further questions at this time.
Rebuttal 1: Rebuttal: ### **Global response** Dear Reviewers, Thank you very much again for your helpful comments. We appreciate your recognition of our work and would like to engage with you through our responses to your questions/comments. If you have any questions about our work or our responses, we would be happy to discuss them further. Best Regards, the Authors. Pdf: /pdf/27ca8fbcc0fe819cc3b6b452281fb3d465a8386c.pdf
NeurIPS_2023_submissions_huggingface
2023
Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning
Accept (poster)
Summary: This paper presents a method for learning world models from in-the-wild videos. By utilizing a context encoder to capture contextual information, the proposed method explicitly models both the context and dynamics to facilitate knowledge transfer across scenes. Experiments are performed on various simulation benchmarks such as Meta-world, DMC Remastered, DeepMind Control Suite, and CARLA. Strengths: The motivation for learning better world models by disentangling context information and dynamics is clear and seems reasonable to me; The proposed method for learning context information is simple and straightforward; The paper is well-organized and easy to read. Weaknesses: The proposed framework is based on the assumption that the context information lies equally in each frame; however, it is very likely that some moving objects might be occluded at some time. There are no specific designs for handling these situations; The experiments mainly validate the sample efficiency of the RL training process; there is no sufficient ablation study on the learned context information; Compared to the vanilla WM baseline, the performance of RL training on the Meta-world benchmark seems not very impressive; The predicted video frames seem not very promising; There is no discussion of limitations and failure cases. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The authors propose to randomly select a frame for predicting the context feature; have the authors tried to use a fixed frame (e.g., the first frame or the last frame of the video clip)? Would the context features extracted from different frames be consistent with each other? And how would choosing different frames affect the final performance of the RL training? It is assumed that the learned context feature captures the static properties of objects (e.g., colors, shapes); would modifying the context feature allow us to generate diverse video frames (e.g., changing the color of the object)?
It would be interesting to show more temporally consistent video frames by editing the context feature. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: There is no discussion of limitations and failure cases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to Reviewer MYvK for providing insightful comments and questions. **Q1**: Discussion and ablation study on context frame selection **Single vs. multiple**: We agree with the reviewer that, in general, a single context frame cannot provide perfect contextual information, and it is challenging to learn fully disentangled contextual and dynamics representations. But our intuition is that **propagating most of the contextual information through a separate encoder can facilitate shared dynamics knowledge transfer between visually distinct pre-training and downstream domains**. Important dynamics information, such as occluded moving objects, can also be captured by the dynamics branch, as RSSMs can handle partial observability. As a first step towards broadly applicable world model pre-training, our experiments in various domains support that a single context frame can overcome complicated context distributions to unlock positive transfer. We have discussed the limitations and future work to utilize multiple frames in more complicated tasks (see $\underline{\text{Q1 in the global response}}$). **Feature consistency**: Our random selection strategy can encourage consistency between context features extracted from different frames. We note that they are not required to be strictly consistent, as our cross-attention mechanism allows contextual information to propagate adaptively between different frames (see $\underline{\text{Fig.8 of main paper}}$ for example). **Experiment results**: To support the discussion, we have experimented using a fixed frame (the first and the last frame) and multiple frames (three randomly sampled frames concatenated as inputs to the context encoder). Results are presented in **$\underline{\text{Fig. 3 of the global response attachment}}$**. We conclude that **different context frame selection schemes do not significantly affect performance**.
Utilizing multiple frames does not provide benefits, probably because the experimental environment is simple or our way of incorporating multiple frames is crude. **Q2**: Predicted video frames seem not very promising We note that SSv2 and YoutubeDriving datasets are large-scale in-the-wild datasets. Generating high-fidelity videos on these domains is especially difficult, and little prior work has successfully achieved this. Developing generative models of videos is still a rapidly evolving and immature field. However, as pointed out at the beginning of the paper, through large-scale video pre-training of a world model, **our work aims to boost sample efficiency of downstream model-based RL, rather than fidelity of video prediction**. In order to learn beneficial representation for MBRL, we believe world models should focus on essential dynamics information, such as object positions and motions, instead of low-level visual details. This naturally motivates our decision-oriented design facilitating separate contexts and dynamics modeling. We present predicted video frames in $\underline{\text{Fig. 8 of main paper}}$ in order to illustrate that ContextWM can learn important dynamics features from large-scale pre-training, which benefits downstream MBRL. On simpler datasets, e.g., Human3.6M, our model can make much better predictions. However, as shown in $\underline{\text{Fig. 7b of main paper}}$, pre-training on this dataset lacking in diversity can even hurt our ultimate goal: sample efficiency of visual control learning. Advances in generative models can help develop stronger backbones for world models but are not the focus of this work and are orthogonal to our contributions. **Q3**: Generating diverse video frames Thanks for the insightful question. We have experimented with modifying the context feature to generate novel videos. We use the DMCR domain as our testbed as it supports modifying visual factors.
While it is difficult to manipulate context features directly, we have done a workaround by sampling a frame from another trajectory to extract contextual information. In this way, we can modify the agent's color and the background's texture. As shown in $\underline{\text{Fig. 1 of the global response attachment}}$, ContextWM can make temporally consistent video predictions by correctly combining the new contextual information with the original dynamics information. For further details, see the qualitative analysis part of $\underline{\text{Q2 in the global response}}$. **Q4**: Performance gain on Meta-world Following prior work, Meta-world performance is measured by the success rate of 10 episodes, which naturally has a high variance. We have made a great effort to solidify our results, including massive repeated experiment runs. In $\underline{\text{Fig. 5b of main paper}}$, we have demonstrated the **statistical significance of our improvement over vanilla WM with a clear margin, aggregated over 48 runs on six Meta-world tasks**, following the protocols of APV and Agarwal et al. [1]. For particular tasks, e.g., Drawer Open, our method can learn with only half of the environment interactions, compared to the baseline. [1] Agarwal et al. Deep reinforcement learning at the edge of the statistical precipice. NeurIPS, 2021. **Q5**: Limitations and failure cases We apologize for the limited discussion of limitations. We will add a detailed discussion in a future revision. Please see $\underline{\text{Q1 in the global response}}$ for the revised discussion on limitations. For failure cases, we have observed that it does not always provide significant gains when pre-training our model on video datasets from different domains or of different amounts (e.g., Human3.6M). We have discussed possible reasons in corresponding paragraphs in the experimental section.
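The frame-selection schemes ablated in Q1 (random vs. fixed first/last frame) can be sketched as follows. This is a minimal illustration with dummy arrays, not the authors' implementation; the function and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_context_frame(clip: np.ndarray, scheme: str = "random") -> np.ndarray:
    """Pick the single frame handed to the context encoder from a clip of shape (T, H, W, C)."""
    if scheme == "random":        # random selection, as in the rebuttal's main setting
        t = int(rng.integers(len(clip)))
    elif scheme == "first":       # fixed-frame ablations
        t = 0
    elif scheme == "last":
        t = len(clip) - 1
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return clip[t]

clip = rng.standard_normal((16, 64, 64, 3))   # dummy 16-frame video clip
ctx_frame = select_context_frame(clip, "random")
assert ctx_frame.shape == (64, 64, 3)
```

Random selection acts as a regularizer here: since any frame of the clip may serve as context, the encoder is encouraged to extract only information shared across frames.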
--- Rebuttal Comment 1.1: Title: POST-REBUTTAL Comment: I acknowledge the authors' dedication to incorporating the updates. Following a meticulous examination of the rebuttal materials, my reservations regarding the modest advancement in performance and the absence of profound technical contributions persist. Given these considerations, I am leaning toward upholding my initial evaluation. --- Reply to Comment 1.1.1: Title: Response to Post-rebuttal Feedback by Reviewer MYvK Comment: Dear Reviewer MYvK, Thank you again for your time and effort in reviewing our paper. We appreciate your careful review of our rebuttal materials and your recognition of our efforts in incorporating updates. We recognize and respect the diverse perspectives regarding the significance of a paper. However, due to the dramatic inconsistency between our opinions, we **kindly request your reconsideration of your reservations on the advancement in performance and technical contributions**. We want to highlight that in accordance with the [NeurIPS 2023 Reviewer Guidelines](https://neurips.cc/Conferences/2023/ReviewerGuidelines), we need specificity, flexibility, and timeliness in your reviews in order for us to better address your concerns. While we've endeavored with full-time efforts to address your concerns, unfortunately, we've observed that your first feedback, logged 10 hours ago, is somewhat vague and limited in specificity and evidence. Here, we will provide further responses to address your concerns, hopefully to your satisfaction, and to help the other reviewers understand the opinions from both sides. **(1)** Performance advancement Please kindly refer to $\underline{\text{Q4 in our rebuttal}}$ above and our further clarification below.
We have made a great effort to support the statistical significance of our improvement and compared our method with typical baselines from previous RL literature, including **DreamerV2 in our main paper, DrQ-v2/Iso-Dream in our supplementary material, and DreamerV3/TransDreamer in our rebuttal**. Our results consistently demonstrate superior efficacy against these typical RL baselines across various domains and tasks, showing the benefits of our in-the-wild pre-training (IPV) framework and the contribution to the RL community. Regarding the improvement upon our most relevant baseline APV [1] (named 'IPV w/ vanilla WM' in our paper), it is still statistically significant in Fig. 5b of our paper, aggregated across 48 runs over six tasks of Meta-world. Note that the improvements of 'ContextWM (Pre: O)' against 'ContextWM (Pre: X)' and 'vanilla WM (Pre: O)' are **comparable in magnitude to improvements made by previous publications** (for example, 'APV (Pre: O / Int: O)' against 'APV (Pre: X / Int: O)' in Fig. 6b of the APV paper [1]). While APV makes this improvement with domain-specific pre-training, we utilize more broadly applicable in-the-wild pre-training. **(2)** Technical contributions Please kindly refer to $\underline{\text{Q2 in our rebuttal}}$ above; we apologize that we did not sufficiently state our contribution in the rebuttal for you. As stated, **our major technical contribution is to unleash the power of in-the-wild pre-training from videos to boost the sample efficiency of downstream MBRL**. Making world models benefit from in-the-wild pre-training is a critical precondition to scale up to big data and large models, since it provides world knowledge widely generalizable and applicable to various downstream tasks. As Reviewer *wPqn* pointed out, 'learning world models on in-the-wild videos is hard', and we highlight that **no previous work has demonstrated positive transfer of a world model from in-the-wild videos** (see Fig.
8c of the APV paper [1]). Motivated by the intricate property of in-the-wild contexts, we propose Contextualized World Models, a framework to explicitly separate contextual information and encourage shared dynamics modeling. Our experiments support that **our model successfully breaks the transfer barrier**. Overall, we have systematically studied **a new problem** (IPV, in-the-wild pre-training from videos), proposed **a new method** tailored for this problem (ContextWM), and demonstrated **significant performance gains** across various domains, which we believe all contribute to the community and help pave the path ahead toward general world models. [1] Seo, Y., et al. Reinforcement learning with action-free pre-training from videos. ICML 2022. We hope that these responses can address your issues and shed light on the significance and solidity of our work. Could you please consider re-evaluating our work based on the updated information? We remain eager to address any lingering concerns and value an open and interactive discussion. Looking forward to your reply. Best regards, Authors. --- Rebuttal 2: Title: Your (reviewer) response to the author rebuttal is missing. Please do it ASAP. Comment: Dear Reviewer, The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. --- Rebuttal 3: Title: Discussion period ends soon Comment: Dear Reviewer MYvK, As the Reviewer-Author discussion period concludes soon, we kindly request your feedback on our rebuttal and post-rebuttal response. **We've earnestly addressed your concerns about performance advancement and technical contributions**. We appreciate your feedback on whether our responses meet your expectations. If any concerns remain, we're eager for further discussion.
If you find our responses satisfactory, we hope for your reconsideration in assessing our paper. Thank you for your valuable time and consideration. We anticipate your response. Best regards, Authors
Summary: Learning a world-model that can generalize to different domains and tasks is difficult. The authors enhanced an existing framework for pre-training world models using in-the-wild videos, which can be fine-tuned on downstream tasks. In particular, the authors introduce a contextual encoder which helps in disentangling temporal dynamics from static contexts. Additionally, they include a cross-attention mechanism and dual reward predictors to improve the learning of task-relevant representations. Strengths: The manuscript is very well written and structured. ● The idea of using a context to encode static information is novel, well-motivated and it shows good results in the DMC remastered task ● Learning world models on in-the-wild videos is hard (and so far does not help downstream tasks, as shown by Seo et al. [49]) - The extension of the Action-Free Pre-training from Videos approach to in-the-wild videos is convincing and the context modulation via a U-Net for reconstruction is innovative. ● The authors perform several relevant ablations to illustrate the role of the different proposed components. Weaknesses: Overall, the context is only used together with the latent dynamics to decode the image. Is there a clear reason why the context is not taken into account for predicting the reward (in Fig. 3b the arrow of the context that goes to the reward seems to be misleading)? As the authors stated, the context might implicitly encode some important information about the task, e.g., the static position of an object, which can be helpful during the fine-tuning phase. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: ● What does the context encode? The motivation and the qualitative analysis are convincing, but it would also be useful to test this further. This could be done via a decoding analysis.
In particular, it would be very interesting to see the difference between the context and the dynamic in the case of the DMC remastered task, where the contextWM provides a significant advantage. ● Why do you think is the gap in performance to prior methods particularly strong for DMCR? ● Looking at figure 6b (bottom), the performance of the pre-trained ContextWM on the SSv2 dataset for the CARLA driving task seems to outperform the one that is not-pretrained. However, the effects of the dataset domain of figure 7b (right) shows that there is almost no difference between the one pre-trained on SSv2 and the one without pre-training. What am I missing? ● Does the contextWM help to achieve better generalization? Could one, e.g., purposefully change the color, size or shape of the object for the meta-world and achieve better performance than the WM? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Fine Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer wPqn for providing a detailed review and insightful questions. **Q1**: Utilizing contexts for predicting the reward We agree that, in general, contextual and dynamics information are both important for task-relevant predictors (reward predictor, actor, and critic). We can also design cross-attention mechanisms for these MLP predictors. However, since contextual information in our architecture has a complicated structure (multi-scale CxHxW feature maps), this may bring extra design and implementation efforts, additional hyperparameters, and computation costs. To maintain a simple architecture and fair comparisons, we opt to predict rewards with only dynamics features and utilize our proposed dual reward predictor structure to encourage completely task-relevant feature encoding, which has been shown to work well in experimental benchmarks. Incorporating contexts and dynamics information for reward prediction and behavior learning in more complicated tasks is a promising future direction. For the misleading Fig. 3b, we will revise it to make it clearer in a future revision. Thanks for your valuable suggestion. **Q2**: Visualization in the case of the DMCR tasks Thanks for the valuable suggestion of a decoding analysis on the DMCR domain. To demonstrate the difference between context and dynamics, we conduct a **compositional decoding analysis** by sampling a random frame from another trajectory to replace the original context and leave dynamics unchanged. Our ContextWM shows excellent compositionality as it can correctly combine the new contextual information with the original dynamics information. We conclude that in this domain, the context encodes static visual factors such as the agent's body color, the background's texture, etc. For further details, please refer to the qualitative analysis part of $\underline{\text{Q2 in the global response}}$. 
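The compositional decoding analysis described in Q2 above (swap the context frame, keep the dynamics latents) can be sketched as below. The encoder/decoder stand-ins are hypothetical placeholders for the trained networks, included only to make the swap procedure concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

def context_encoder(frame):
    """Stand-in for the trained context encoder: frame (H, W, C) -> context vector (C,)."""
    return frame.mean(axis=(0, 1))

def decode(dyn_latent, context):
    """Stand-in for the trained decoder: combine a dynamics latent with the context."""
    return np.outer(dyn_latent, context)   # placeholder reconstruction

def compositional_decode(dyn_latents, other_trajectory):
    """Keep the dynamics latents of one trajectory, but extract the context
    from a random frame of ANOTHER trajectory, then decode frame by frame."""
    swap_frame = other_trajectory[int(rng.integers(len(other_trajectory)))]
    ctx = context_encoder(swap_frame)
    return [decode(z, ctx) for z in dyn_latents]

traj_a_latents = [rng.standard_normal(8) for _ in range(4)]   # dynamics of trajectory A
traj_b_frames = rng.standard_normal((16, 64, 64, 3))          # frames of trajectory B
frames = compositional_decode(traj_a_latents, traj_b_frames)
assert len(frames) == 4 and frames[0].shape == (8, 3)
```

If the representation is well disentangled, the decoded frames should show trajectory A's motion rendered with trajectory B's static visual factors (e.g., body color, background texture).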
**Q3**: Particularly strong performance on DMCR tasks Thanks for your insightful question. Please refer to $\underline{\text{Q2 in the global response}}$ for the detailed response. **Q4**: Effects of pre-training with SSv2 on CARLA We apologize for a plotting mistake: **the 'w/o Pre-train' curve in Fig. 7 (CARLA) should be the same as the 'ContextWM (Pre: X)' curve in Fig. 6 (bottom)**, but was incorrectly plotted as the 'vanilla WM (Pre: O)' curve in Fig. 6 by mistake. We have carefully checked the figures to ensure there are no other mistakes and will correct this in a future revision. Many thanks for pointing it out. **Q5**: ContextWM promotes generalization We believe that our ContextWM can promote generalization due to better design of context and dynamics modeling. While changing the color, size, or shapes in Meta-world is difficult, we have conducted similar experiments in another benchmark, DMC Remastered, which randomly resets all visual factors on the initialization of each training and evaluation episode. As shown in $\underline{\text{Fig. 6 of main paper}}$, ContextWM outperforms vanilla WM significantly on DMCR, both with and without pre-training. These results demonstrate that **ContextWM can achieve better generalization and performance in unseen visual environments**. Measuring out-of-distribution generalization ability (e.g., training on standard DMC/Meta-world and testing on visually modified ones) of ContextWM and vanilla WM is an interesting future direction. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal to the comments of the other reviewers as well as mine. I think the paper was already good and also improved during the rebuttal period. Thus, I maintain my accept score (7). I hope the other reviewers, who had lower scores, can check out the response! --- Reply to Comment 1.1.1: Title: Appreciation for Your Support Comment: Dear Reviewer wPqn, Your support and maintained acceptance score are sincerely appreciated.
Thank you for recognizing our improvements and suggesting that other reviewers check our responses. Best regards, Authors
Summary: This paper studies whether large-scale in-the-wild datasets can be used to pre-train world models for efficient downstream reinforcement learning. Specifically, they introduce Contextualized World Models (ContextWM), an architecture specifically designed to learn to separate context and dynamics modeling. Their experimentation reveals that the proposed methodology outperforms DreamerV2 on a variety of downstream tasks. Strengths: - Well-written. The paper is well-written, and the figures aid in the understanding of the methodology. - Variety of experiments. The authors conduct a variety of experiments, comparing not just to DreamerV2, but also analyzing the effects of pre-training, architecture choices, dataset domain, etc. Furthermore, both quantitative and qualitative comparisons are included. - Strong experimental results. Relative to DreamerV2, the proposed methodology achieves a strong performance -- the gap seems to be particularly large for the DMC Remastered tasks. Weaknesses: - Missing comparison to prior work. APV [49] is the most similar prior work which this work builds off of (and seems to be the SOTA in this space), and yet the proposed method is not compared to APV. Many of the tasks used, hyper-parameters, and evaluation protocols adopted are from APV, allowing for a comparison, yet somehow, this comparison is omitted. Looking at the results plots in the APV paper, visually, the performance of ContextWM seems similar to that of APV (and in some cases clearly worse, e.g. in the dial turn task). While one may claim that a comparison to APV may be unfair because both the data and the model would be different, this would still reveal whether the ability to use a larger amount of in-the-wild data as well as the changes to the model architecture are actually beneficial.
Furthermore, training APV on this large-scale data and seeing whether ContextWM outperforms it would be an experiment which would reveal whether the architectural changes proposed are significant. Finally, APV primarily compares to DreamerV2 as this was the SOTA when APV was published -- now, DreamerV3 seems to be the SOTA, so a comparison to DreamerV3 should be conducted. - Incomprehensive ablations. It is not clear which one environment the ablation study is conducted on. It would be a lot more convincing if the ablation study was done across tasks, and if the trends held true across tasks as this would alleviate concerns of cherry-picking a task. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: - Is it possible to qualitatively demonstrate that ContextWM is able to separate context and dynamics modeling? - For some tasks, pre-training and choice of data to pre-train with makes a huge difference, e.g. DMC Re-mastered. For others, not so much, e.g. CARLA. Is there a sense of why this is the case? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: There is no discussion on the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to Reviewer ajER for providing a thorough review and valuable questions. **Q1**: **Comparison with APV** We **respectfully disagree with the comments that we do not compare with APV**. We apologize for not clarifying that **our 'IPV w/ vanilla WM' in $\underline{\text{Fig. 5 and 6 of main paper}}$ is the APV baseline trained on the same data**, equipped with the stacked latent model and intrinsic bonus proposed by APV. We brand it with the new name to emphasize that it is pre-trained on **I**n-the-wild data rather than only **A**ction-free data. Furthermore, since neither the one-layer RSSM in Dreamer nor the stacked RSSM in APV has a contextualized component as ContextWM does, we named them vanilla WM at the beginning of our experimental section. We will change to more proper names in a future revision. In a word, **we have already trained APV on large-scale data and demonstrated that the architectural changes proposed are significant**. We also note that the performance gap with the original APV comes from different pre-training datasets. **Using the same curated RLBench dataset, our ContextWM (see $\underline{\text{Fig. 7b of main paper}}$) can outperform originally reported APV results.** (Interestingly, we also find that the dial turn task can only benefit from RLBench data, regardless of the architecture.) It is unsurprising that pre-training data from a similar domain can further benefit downstream tasks. Nevertheless, **our methodological contribution unleashes the power of diverse video datasets instead of curated domain-specific ones to enable generally capable world model pre-training**. While the SSv2 dataset does not help downstream tasks in the original APV, our ContextWM pre-trained with it has been shown to benefit various control tasks.
**Q2**: Comparison with DreamerV3 Since our ContextWM is built upon DreamerV2, it is natural to compare it with DreamerV2 to reveal the significance of in-the-wild pre-training and the proposed architecture. **Our technical contributions are orthogonal to specific model-based RL methods** and can also be combined with DreamerV3 to further improve performance, which is left for future work. Nevertheless, we conduct preliminary comparisons to DreamerV3 without pre-training. Results are presented in $\underline{\text{Fig. 4 of the global response attachment}}$. We conclude that even with several improved training techniques, DreamerV3 is still inferior to our method, showing the significance of in-the-wild video pre-training and explicit context modeling. **Q3**: **Clarification on ablation study** Following the protocol of Agarwal et al. [1] and APV, **we conducted the ablation study on all the tasks and reported aggregated results**. Explanations of our ablation study results can be found in $\underline{\text{Sec. 5.1}}$ and the captions of $\underline{\text{Fig. 5, 6, 7}}$. We will clarify it further in a future revision. [1] Agarwal et al. Deep reinforcement learning at the edge of the statistical precipice. NeurIPS, 2021. **Q4**: **Qualitative evaluation** While it is challenging to learn fully separated representations, we have provided qualitative evaluation in $\underline{\text{Sec. 5.5 and Fig. 8 of main paper}}$ to demonstrate the ability of ContextWM to separate contexts and dynamics. We also provide additional demonstrations in **$\underline{\text{Fig. 1 of the global response attachment}}$** to show that our model finetuned on the DMCR domain successfully learned disentangled representations of contexts and dynamics. For further details, please refer to the qualitative analysis part of $\underline{\text{Q2 in the global response}}$. **Q5**: Particularly significant performance gains on DMCR tasks Thanks for your insightful question.
Please refer to $\underline{\text{Q2 in the global response}}$ for the detailed response. **Q6**: Limitations We apologize for the limited discussion of limitations. We will add a detailed discussion in a future revision. Please see $\underline{\text{Q1 in the global response}}$ for the revised discussion on limitations. --- Rebuttal 2: Title: Your (reviewer) response to the author rebuttal is missing. Please do it ASAP. Comment: Dear Reviewer, The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. --- Rebuttal 3: Title: Request of Reviewer's attention and feedback Comment: Dear Reviewer ajER, Thanks again for your dedication to reviewing our paper. We write to kindly remind you that these are the last few days of the Reviewer-author discussion period. We have made every effort to address the concerns you raised and improve our paper: - We **clarify that the 'IPV w/ vanilla WM' in the main paper is the APV baseline** trained on the same data as ours. Thus we have already trained APV on large-scale data and demonstrated that our architectural changes are significant. - We **provide additional comparison with DreamerV3**, where our model still performs the best. - We **clarify that the ablation study is conducted on all the tasks** and results are reported in aggregated forms. Ablations across all tasks show a consistent improvement of our model over baseline methods. - We **provide additional qualitative evaluation results** in the global response to show that our model is able to separate context and dynamics modeling. - We **explain the reason behind particularly significant performance gains** on DMCR tasks. Please kindly let us know if you have any remaining questions.
If our responses have addressed your concerns, would you please consider re-evaluating our work based on the updated information? Looking forward to your reply. Sincerely, Authors --- Rebuttal 4: Title: Discussion period ends soon Comment: Dear Reviewer ajER, On this last day of the Reviewer-Author discussion period, we respectfully extend a final request for your valuable feedback on our rebuttal. Your perspective would greatly contribute to the thorough evaluation of our work. **Following your suggestions, particularly regarding the experiments**, we believe that we have made a great effort to provide all the experiments and clarifications that we can. If our rebuttal has addressed your concerns, we hope the reviewer will reconsider the evaluation of our paper. We remain open to any further discussions. We sincerely extend our gratitude for your dedicated review efforts and anticipate your response. Best regards, Authors --- Rebuttal Comment 4.1: Comment: Thank you for clarifying that the 'IPV w/ vanilla WM' vanilla baseline in the main paper is actually APV. This addresses my main concern that the proposed methodology was not fairly compared to prior work. The reviewer also agrees with the comments about DreamerV3 being orthogonal to the contribution of this work. Given that the misunderstanding has been resolved, and as the rebuttal has adequately addressed all of my concerns, I am increasing my rating to 7: Accept. --- Reply to Comment 4.1.1: Title: Appreciation for Your Feedback and Support Comment: Dear Reviewer ajER, We sincerely appreciate your thoughtful re-evaluation of our paper and the subsequent rating adjustment. Your recognition of our contributions and the resolution of misunderstandings greatly encourage us. Your valuable input has undoubtedly enhanced the quality of our work. Thank you for your dedicated engagement and support. Best regards, Authors
Summary: This paper proposes a Contextualized World Model with In-the-wild Video Pretraining, which extends recently proposed action-free pre-training from videos (APV) to the case of contextualized video-prediction models. Specifically, they propose to randomly sample a frame and use it as "contextualized information" for the prediction model. The "contextualized" information is incorporated into the prediction model via multi-scale cross-attention mechanisms. Besides, during the model-based RL phase, they propose to predict both the pure task reward and the sum of the task reward plus a weighted intrinsic reward. Empirical validations on several well-known benchmarks (Meta-world, DMC, and CARLA) show the superior performance of the proposed method. Strengths: 1. The paper is generally well-written and easy to follow. 2. While the proposed method is simple, experimental results show a stable performance gain by the proposed method. 3. Besides the main contribution, the paper also provides an interesting analysis of the choice of pre-training datasets. Weaknesses: 1. The technical novelty of the proposed method is not high. The main proposal is to add contextualized information to facilitate better prediction. Besides, the paper does not discuss how the proposed approach (random sample selection) incorporates contextualized information well. Especially, I'm not sure we can call the random variable $c$ a contextualized vector since it is computed from only a single frame rather than the full context of the trajectories. In this sense, I'm wondering what happens if we predict/reconstruct observations using the same multi-scale cross-attention architecture but with the image from the same time step for each frame. In this setup, the model does not use contextualized information but has a similar architecture to the proposed method, and is thus more appropriate as a baseline. 2. Lack of discussion of transformer-based world models.
As shortly discussed, several studies incorporate transformer architecture in the prediction model. Since the transformer architecture predicts the future in an autoregressive fashion, I think it is more natural for handling contextualized information. However, the current manuscript lacks discussion on this point and comparison with transformer-based world models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Did you try methods other than random sample selection to compute contextualized information? 2. Did you try a comparison with transformer-based architecture? 3. The logic behind several sentences is hard to capture. Could you please explain more about these sentences? - In line 182, "Nevertheless, ..... Therefore, we propose a dual reward predictor" => Why does the dual predictor resolve the issue? Why not just balance lambda depending on the task? - In line 234, "indicating that the performance gain of IPV with vanilla WM is primarily due to the intrinsic exploration bonus. " => The logic is unclear. [Minor] - In eq. 6, how do you compute the KL when t=1? - In Fig. 5-b, Pre: O should be Pre: ✓ or with Pre or something like that. - If I correctly understood, Fig. 3 is a bit misleading as the context variable $c$ is directly input into each frame, rather than recurrent prediction. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: A limitation section should be added. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to Reviewer JHvL for providing an insightful review and valuable comments. **Q1**: **How we incorporate contextual information** **Clarification**: We apologize for using the ambiguous term 'context' without elaborate clarification. Videos and visual control trajectories are **spatiotemporal** data. They have contexts in the temporal dimension (namely the history) and contexts in the spatial dimension (namely visual details, e.g., colors, shapes, and layouts of objects). 'Contexts' in our paper stands only for **spatial contexts**. A context frame is selected to extract (static) contextual information in a multi-scale manner, which encourages the latent dynamics model to focus on temporal dynamics instead of wasting model capacity on capturing low-level visual details. It is prevalent in the literature to utilize a reference image to condition generative models of videos [1, 2]. For **temporal context** modeling, we use standard RSSMs from Dreamer to model history observations. More powerful sequential backbones such as Transformers can be explored, but this is orthogonal to our technical contribution of explicitly modeling (spatial) contexts. **Novelty**: Reviewers kJR6 and wPqn both recognize our method as innovative and well-motivated. To the best of our knowledge, we are the first to separately model contextual information for world models to handle complicated spatiotemporal data. **Empirical evidence**: We have shown that our model successfully unleashes the power of in-the-wild video pre-training and obtains significant gains on downstream tasks. Qualitative evaluation in $\underline{\text{Fig. 8 of main paper}}$ and $\underline{\text{Fig. 1 of the global response attachment}}$ also supports that our model can separate contexts and dynamics.
**Reviewer-proposed baseline**: If understood correctly, the proposal from the reviewer, which uses the image from the same step as the context for each frame, can hardly learn useful representations for MBRL. Note that we extract multi-scale shortcuts from the context frame in a U-Net manner. Learning to reconstruct each frame conditioned on one fixed context encourages the model to learn temporal variations. However, when conditioned on the context frame from the same step for each frame, the model can learn to trivially copy low-level feature shortcuts for reconstruction. Our experiments of this baseline support our justification, where **the norm of RSSM features rapidly shrinks to zero during training**. [1] Singer et al. Make-a-video: Text-to-video generation without text-video data. [2] Esser et al. Structure and content-guided video synthesis with diffusion models. **Q2**: Other context frame selection methods. As shown in $\underline{\text{Fig. 3 of the global response attachment}}$, we have experimented with the first or the last frame as the context but found no significant performance difference. We have also tried randomly selecting three frames as the context and still obtained no performance gain, which indicates that a single frame is adequate for context modeling in our experimental benchmarks. **Q3**: Discussion and comparison with transformer-based architecture As discussed in Q1, transformer-based architecture is immature and **orthogonal** to our technical contribution. A combination of powerful transformer architecture and our framework has the potential to improve performance further, which is left for future work. Nevertheless, we have conducted preliminary comparisons to a transformer-based method, TransDreamer [3], without pre-training. Results are presented in $\underline{\text{Fig. 4 of the global response attachment}}$. 
We observe that even equipped with transformers, TransDreamer performs similarly to Dreamer and is inferior to our method. We do not compare with recently published work (IRIS, NeurIPS 2022 and TWM, ICLR 2023) since they only support discrete control (Atari) and utilize different actor-critic learning schemes, which cannot be directly compared with our work on continuous control. [3] Chen et al. TransDreamer: Reinforcement learning with transformer world models. 2022. **Q4**: Clarification on sentences - Line 234: Our 'IPV w/ vanilla WM' baseline has two major differences from DreamerV2: in-the-wild video pre-training and the video-based intrinsic bonus. Although 'IPV w/ vanilla WM' outperforms DreamerV2, our ablation study shows that this baseline can only benefit from the intrinsic bonus, but not from in-the-wild pre-training. - Line 182: As shown, the intrinsic bonus is essential for learning efficiency, but it is computed using an ever-changing replay buffer during training. An additional predictor for the pure task reward can force the dynamics model to encode task-relevant information, regardless of intrinsic reward drift, which helps representation learning. We do not downweight the intrinsic reward since it needs extra hyperparameter tuning and, more importantly, may hurt exploration and learning efficiency. **Q5**: Minor questions - KL term: When t=1, the terms should be $\text{KL}[q(z_1|o_1)\|p(\hat{z}_1)]$ and $\text{KL}[q(s_1|z_1)\|p(\hat{s}_1)]$. Following the implementation of Dreamer, priors and posteriors of $z_1$ and $s_1$ are predicted with dummy previous states $z_0, s_0$ (all-zero initial states of RSSM) and actions $a_0$ (all-zero too), which unifies the implementations for t=1 and t>1. - Legends and Fig. 3: We appreciate the suggestions and will use check marks for the legend and revise Fig. 3 to make it clearer in a future revision. **Q6**: Limitations We apologize for the limited discussion of limitations.
We will add a detailed discussion in a future revision. Please see $\underline{\text{Q1 in the global response}}$ for the revised discussion on limitations. --- Rebuttal 2: Title: Your response to the author rebuttal is missing. Please do it ASAP. Comment: Dear Reviewer, The author has posted their rebuttal, but you have not yet posted your response. Please post your thoughts after reading the rebuttal and other reviews as soon as possible. All reviewers are requested to post this after-rebuttal-response. --- Rebuttal 3: Title: Request of Reviewer's attention and feedback Comment: Dear Reviewer JHvL, Thanks again for your dedication to reviewing our paper. We write to kindly remind you that these are the last few days of the Reviewer-author discussion period. We have made every effort to address the concerns you raised and improve our paper: - We **clarify our usage of the term 'context' to dispel misunderstandings**. 'Contexts' in our paper stands only for spatial contexts, whereas temporal context is handled by standard RSSMs. We provide additional results in the global response to show that our model can clearly separate contexts and dynamics. - We **experiment with other context frame selection methods** and show that our method is adequate for context modeling in our experimental benchmarks. - We **explain why transformer-based architecture is orthogonal to our technical contributions**. Nevertheless, we provide additional comparison results against transformer-based methods, where our method still performs the best. - We clarify and revise our writing on several potentially misleading sentences. Please kindly let us know if you have any remaining questions. If our responses have addressed your concerns, would you please consider re-evaluating our work based on the updated information? Looking forward to your reply. Sincerely, Authors --- Rebuttal Comment 3.1: Comment: Thank you for providing a detailed response.
I have read the answer and other reviewers' comments. To a good extent, the rebuttal resolves my concerns, and I am happy to increase my score. However, I still think the comparison with Transformer-based architectures should be investigated more deeply, as, by the paper's definition, all transformer-based architectures could be regarded as ``contextualized``. Besides, the transformer-based architecture might be easier to incorporate for pre-training on video without any mechanism for adding layers (just changing conditioning variables might be enough). --- Reply to Comment 3.1.1: Title: Appreciation for Your Support and Constructive Feedback Comment: Dear Reviewer JHvL, We sincerely appreciate your careful review of our rebuttal and your thoughtful reconsideration of your assessment. Your feedback has been invaluable in strengthening our paper. We acknowledge that while our contextualized image decoder's cross-attention mechanisms resemble 'transformer layers' for contextual information conditioning, your suggestion to thoughtfully craft a dedicated transformer-based architecture is essential. Furthermore, your perspective on the flexibility of transformer-based architecture to incorporate pre-training on video by changing conditioning variables is also truly insightful. As previously discussed, the combination of our pre-training framework with a transformer architecture holds great potential, and we will certainly add discussion regarding this in our revised paper, and delve deeper into this aspect in future work. Thank you for your constructive input. Best regards, Authors
Rebuttal 1: Rebuttal: ## Global Response to All Reviewers We would like to thank the reviewers for their detailed comments. This paper aims to pre-train a broadly generalizable world model from in-the-wild videos to boost sample-efficient learning of downstream visual control tasks. Extensive experiments on large-scale video datasets and various visual control domains have demonstrated the effectiveness of our proposed In-the-wild Pre-training from Videos (IPV) with ContextWM. We have made every effort to address all the reviewers' concerns and responded to the individual reviews below. We have also answered common questions raised by the reviewers in this global response. Note that **all the new figures supplementary to all responses are included in the PDF attachment of this global response**. We only present results on part of the tasks due to limited time and computational resources. **Q1**: Revise the discussion on limitations We apologize for the limited discussion of limitations. We will add the following expanded discussion on limitations in a future revision: > **Limitations and future work.** One limitation of our current method is that a randomly selected single context frame may not be sufficient to capture complete contextual information of scenes in the real world. Consequently, selecting and incorporating multiple context frames as well as multimodal information [47] for better context modeling need further investigation. Our work is also limited by medium-scale sizes in terms of both world models and pre-training data, which may hinder learning broadly applicable knowledge. Given that, an important direction is to systematically examine the scalability of our method by leveraging scalable architectures like Transformers [36, 54] and massive-scale video datasets [12, 37]. Lastly, our work focuses on pre-training world models via generative objectives, which use massive parameters inefficiently on image reconstruction to model intricate contexts. 
Exploring alternative pre-training objectives, such as contrastive learning [40, 7] or self-prediction [52], could further release the potential of IPV by eliminating heavy components on context modeling and focusing on dynamics modeling. **Q2**: Particularly significant performance gains on DMCR tasks Our method obtains considerable performance gains on DMCR tasks. The main reason is that DMCR is a purposefully designed benchmark, which measures visual generalization and requires the agent to extract task-relevant information as well as ignore visual distractors. Our ContextWM has the advantage of separately modeling contexts (task-irrelevant in DMCR) and dynamics (task-relevant in DMCR), which avoids wasting the capacity of dynamics models in modeling low-level visual details. Furthermore, pre-training with in-the-wild videos enables our models to eliminate diverse distractors and capture shared motions, which is essential for visual generalization in RL. In contrast, vanilla WM needs to model complicated contexts and dynamics in an entangled manner, which adds difficulty to dynamics learning and behavior learning on these features. **Qualitative analysis**: To demonstrate the ability of ContextWM to separate context and dynamics modeling, we provide additional video prediction results in **$\underline{\text{Fig. 1 of the global response attachment}}$**. In the _Context Shift_ row of the figure, we sample a random frame from another trajectory to replace the original context and leave dynamics the same as the _IPV w/ ContextWM_ row to conduct a **compositional decoding analysis**. We can see that after shifting the context, ContextWM correctly combines contextual information from the new context with dynamics information from the original trajectory. These results show that our model finetuned on the DMCR domain has successfully learned disentangled representations of contexts and dynamics. 
The vanilla WM, on the other hand, suffers from learning entangled features and, as a result, makes poor predictions about the environment transitions. **General-purpose framework**: We also emphasize that, motivated by separating contexts and enhancing temporal dynamics modeling, our proposed IPV w/ ContextWM is a general-purpose framework and, as shown, can obtain considerable performance gains on various benchmarks beyond DMCR that exhibit more complicated entangling of contexts and dynamics. **Q3**: Suggestion on the legends and Fig. 3 of main paper Thanks for the valuable suggestions. We will use check marks for the legends to indicate 'with pre-training' and revise Fig. 3 to make the architecture clearer in a future revision. Pdf: /pdf/9c9977e2e53039a59a152ec338559d2fcee84e44.pdf
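The compositional decoding analysis described in Q2 (swapping the context input while keeping the dynamics features fixed) can be illustrated with a toy sketch. This is a hypothetical additive decoder for illustration only, not the paper's architecture; `decode`, `ctx_a`, `ctx_b`, and `dynamics` are all made-up names:

```python
import numpy as np

def decode(context_feat, dynamics_feat):
    # Hypothetical additive decoder: the output combines static context
    # features with temporally varying dynamics features.
    return context_feat + dynamics_feat

ctx_a = np.full(4, 1.0)               # context from trajectory A
ctx_b = np.full(4, -1.0)              # context sampled from another trajectory B
dynamics = np.arange(4, dtype=float)  # dynamics features from trajectory A

original = decode(ctx_a, dynamics)
shifted = decode(ctx_b, dynamics)     # context shift: new context, same dynamics

# If context and dynamics are disentangled, the shift changes only the
# static component of the output.
print(np.allclose(shifted - original, ctx_b - ctx_a))  # True
```

In a disentangled model, the analogous experiment swaps the context frame fed to the decoder while reusing the latent dynamics rollout, which is what the Context Shift row in the attachment demonstrates.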
NeurIPS_2023_submissions_huggingface
2023
Summary: This submission presents the contextualized world model (ContextWM), a framework for leveraging in-the-wild videos for pre-training of a world model to be used in model-based reinforcement learning. Following the work from Seo et al. (2022), the authors pre-train an action-free version of the recurrent state-space model (RSSM) with two important modifications: 1) a context encoder processes a randomly sampled frame of the input video to provide context features, which the decoder can directly access to better reconstruct static visual details, enabling the dynamics model, on which the decoder is also conditioned, to focus on temporally varying information. 2) the authors also opt for using a dual-reward predictor during fine-tuning, which predicts the pure task reward in addition to the combined task and video-based intrinsic novelty reward proposed by Seo et al. (2022). This facilitates task-relevant representation learning. The ContextWM is evaluated on Meta-world, the remastered DeepMind Control Suite, and a task with varying weather conditions in the CARLA driving simulator. In most benchmarks, ContextWM shows significant improvements in terms of sample efficiency or final performance. An ablation study and other analytical experiments further show the effectiveness of ContextWM and its design decisions. Strengths: The proposed contextualized world model is novel and the design decisions are well motivated. For the most part the description of the method and experimental setup is very clear. The performance improvements are significant and in some cases very impressive. The qualitative analysis provides some interesting insights, for instance the clear separation of video representations for two videos with contrastive labels in Figure 8b. Weaknesses: 1. As the main contribution the cross-attention to the context should be explained in a bit more detail. 
Since the authors mention U-Nets, I wonder whether the decoder attends to context features at each of the corresponding three stages shown in Table 1 in the supplementary material or only at the decoder input level? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Why did you choose BatchNorm instead of LayerNorm? Did you experiment with both? 2. Maybe use a check mark in the legend of Figure 6b instead of the circle to indicate "with pretraining". 3. It'd be very interesting to see more examples with other contrastive labels for the Video representations experiment in Section 5.5. ## Acknowledgement of rebuttal I have read the rebuttal and the other reviews. My relatively minor concerns have been addressed, given that the authors have provided some requested clarifications, a discussion of limitations, and additional insightful experiments. I strongly believe this paper should be accepted. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The discussion of limitations is very *limited*. The discussion section mostly mentions future work on scaling and exploring other pre-training objectives. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer kJR6 for providing a detailed review, valuable suggestions, and a positive evaluation of our paper. **Q1**: Details of cross-attentions in the decoder In $\underline{\text{Appendix C.3 in the supplementary material}}$, we have elaborated on the details of how our multi-scale context features are connected to the decoder in a U-Net style: > The outputs of the last residual block of two stages in the context encoder (stage2 and stage3) before average pooling (thus in the shape of 16 × 16 and 8 × 8, respectively) are passed to the corresponding residual block of the image decoder and used to augment the incoming decoder features with cross-attention. We do not use the 32 × 32 features from stage 1 due to the quadratic memory complexity of cross-attention. We will clarify these details in the main text in a future revision. **Q2**: BatchNorm vs LayerNorm We chose BatchNorm as it is the dominant normalization technique in CNNs. Note that our technical contributions are orthogonal to the choice of visual backbones. Exploring transformer-based backbones (e.g., ViTs), which are usually equipped with LayerNorm, is left for future work. We have experimented with replacing BatchNorm with LayerNorm in our architecture. Results in $\underline{\text{Fig. 5 of the global response attachment}}$ indicate that our architecture is robust to this choice. **Q3**: Additional visualization of video representations We have provided additional examples in $\underline{\text{Fig. 2 of the global response attachment}}$. Given videos with two distinct labels, '*moving away from something with your camera*' and '*approaching something with your camera*', our ContextWM provides a clear separation of video representations while vanilla WM fails. **Q4**: Suggestion on the legends We appreciate this suggestion and will use a checkmark instead of the O mark to indicate "with pre-training" in a future revision.
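As a rough illustration of the cross-attention conditioning described in Q1 above — decoder features (queries) attending over a flattened context-encoder feature map (keys/values), with a residual connection — here is a minimal single-head numpy sketch. The shapes and the absence of learned query/key/value projections are simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Augment decoder features by attending over flattened context features."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over context tokens
    return queries + weights @ keys_values          # residual connection

# Hypothetical shapes: an 8x8 decoder stage attends to the 8x8
# context-encoder feature map (both flattened to 64 tokens).
rng = np.random.default_rng(0)
decoder_feat = rng.standard_normal((8 * 8, 16))
context_feat = rng.standard_normal((8 * 8, 16))
out = cross_attention(decoder_feat, context_feat)
print(out.shape)  # (64, 16)
```

The quadratic memory cost mentioned in Q1 is visible here: the `scores` matrix has one entry per (decoder token, context token) pair, which is why the 32 × 32 stage is skipped.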
**Q5**: Limitations We apologize for the limited discussion of limitations. We will add a detailed discussion in a future revision. Please see $\underline{\text{Q1 in the global response}}$ for the revised discussion on limitations. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications and conducting additional insightful experiments (e.g. the model roll-outs with changed context features). I believe the paper should be accepted and I encourage my fellow reviewers to carefully study the author responses, since some of the criticism was based on misunderstanding, which the authors tried to resolve with detailed explanations. --- Reply to Comment 1.1.1: Title: Appreciation for Your Support Comment: Dear Reviewer kJR6, Thank you sincerely for your positive and encouraging feedback. We greatly appreciate your recognition of our efforts to address concerns and provide clarifications and we hope our explanations will help dispel any misunderstandings. Your recommendation for acceptance boosts our confidence in the value of our work. Best regards, Authors
Conservative State Value Estimation for Offline Reinforcement Learning
Accept (poster)
Summary: This paper proposes Conservative State Value Estimation (CSVE) for offline reinforcement learning, which directly penalizes the V-function on out-of-distribution (OOD) states and guarantees conservative value estimation under specific state distributions. The authors develop a practical actor-critic algorithm based on CSVE and evaluate its performance on classic continuous control tasks. Strengths: The paper has both theoretical and empirical results. Weaknesses: The significance of the proposed method is not clear. To be more specific, compared with the penalization of the Q-function for OOD actions in CQL, the advantage of penalizing the V-function for OOD states is not clear. This penalization alone cannot address the core issue of offline RL - overestimation and extrapolation error. An additional policy constraint needs to be incorporated into the proposed method. Direct penalization of the V-function in offline RL does not affect the Q-value of OOD actions (see Eq. 5 and 6). It is Q, rather than V, that is used in action selection, so the agent will still choose over-estimated OOD actions. The significance of the theories is not clear. The assumption $\text{supp}~ d \subseteq \text{supp}~ d_u$ is very strong and hard to satisfy. Besides, with this assumption satisfied, the algorithm can never penalize the value for OOD states (since $d$ is in-distribution). Theorem 3.2 and Theorem 3.3, which state that the expected value of the estimated V under $d$ or $d_u$ is a lower bound of the true values, are of little significance. It is still likely that V is severely over-estimated at states outside $\text{supp}~d_u$. The paper is not well organized. The empirical evaluation is not sufficient to support the effectiveness of CSVE's core component - Eq. 5. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: I do not have additional questions. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 1 poor Contribution: 1 poor Limitations: The authors do not discuss limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the comments. Nevertheless, we believe there are some points of misunderstanding that require clarification or further discussion. As commented by other reviewers, the paper does have clear technical contributions and evaluation. We explain them below. ## Q1: The significance of CSVE. > the advantage of penalizing the V-function for OOD states is not clear... This penalization alone cannot address the core issue of offline RL - overestimation and extrapolation error... Response: It is true that CSVE alone cannot address OOD actions and derive a conservative policy. However, CSVE does have advantages by indirectly affecting the Q-function and policy. By penalizing values of OOD states, CSVE reduces excessive exploration in OOD states while allowing reasonable exploration in those states close to the samples. This effect is generated in the second term of Eq. 5, and transferred to the Q-function via Eq. 6. > Direct penalization of the V-function in offline RL does not affect the Q-value of OOD actions (see Eq. 5 and 6). It is Q used in action selection rather than V, so the agent will still choose over-estimated OOD actions. Response: Our methodology in Section 4.2 should address this concern while preserving the advantage of conservative state value estimation. As demonstrated in Eq. 9, our algorithm selects from in-sample actions and near-sample actions that have higher values even under conservative estimation. ## Q2: The significance of theory. > The issue of assumption $\text{supp}~d \subseteq \text{supp}~d_u$ Response: Let us clarify. In the theory section, $d_u$ refers to the underlying state distribution of the behaviour policy, and thus its support is not only the samples in the dataset but the whole reachable space of states under the policy $u$. With this setting, it should be reasonable to assume $\text{supp}~d \subseteq \text{supp}~d_u$.
Indeed, prior works including CQL, COMBO and most theoretical papers (see Table 1 in [3]) make the same assumption. This assumption is mainly for convenience in theoretical analysis. In practice, it could be satisfied by constraining the exploration in and near the dataset generated by $d_u$. ## Q3: The empirical evaluation is not sufficient to support the effectiveness of CSVE's core component - Eq. 5. Response: Directly measuring the effect of the components in Eq. 5 is hard. Instead, we evaluated its effectiveness comprehensively through controlled experiments with baselines and alternative components of CSVE. The design logic is as follows. - By comparing with CQL-AWR / IQL / AWAC, we assess the value estimation component of CSVE (Eq. 5-6) versus CQL / IQL (expectile) / AWAC (normal TD-based value estimation), under the same policy extraction method AWR. - By comparing CSVE and the model-free CSVE in Appendix C.6, we verify the benefits of model-based next-state sampling over the model-free state perturbation used in Eq. 5. - Ablation study: in Appendix B.1 and Section 5.2, we evaluate with varying hyper-parameters $\lambda$ and $\beta$ to assess how the policy extraction is affected by the conservative state value estimation. **Table R1**: Implementation comparison among different offline RL algorithms.
Algorithm | Value estimation | Policy improvement
--- |:---:|:---:
CQL | Q-values with penalty on OOD actions | SAC-style actor
CQL-AWR | Q-values with penalty on OOD actions | AWR + state exploration
IQL | Q-values with expectile regression | AWR
AWAC | Normal Q-values | AWR
COMBO | Q-values with penalty on OOD states and actions (via multi-step model rollouts) | SAC-style actor
CSVE (default) | V-values with penalty on OOD states (via 1-step model rollouts) | AWR + state exploration
CSVE (Appendix C.6) | V-values with penalty on OOD states (via perturbing in-sample states) | AWR + state exploration

References: [1] CQL; [2] COMBO; [3] Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for the reply. After reading the rebuttal, I still have the following questions or concerns. ## Q1 > The authors mentioned "This effect is generated in the second term of Eq. 5, and transferred to the Q-function via Eq. 6." However, in Eq. 6 both s and s' are from the dataset. Hence, I think the effect that "CSVE reduces excessive exploration in OOD states" cannot be transferred to the Q-function via Eq. 6, since there is no OOD state in Eq. 6. ## Q2 I understand $d_u$ is the underlying state distribution of the behaviour policy. OOD states refer to the ones out of the distribution of $d_u$. It seems the authors have not addressed my concerns: 1. "Besides, with this assumption satisfied, the algorithm can never penalize the value for OOD states, since $d$ is in-distribution." To be more specific, under the condition $\text{supp}~d \subseteq \text{supp}~d_u$, Eq. 5 can never penalize the value for OOD states $s \notin \text{supp}~d_u$. 2. "Theorem 3.2 and Theorem 3.3, that the expected value of the estimated V under $d$ or $d_u$ is a lower bound of the true values, are of little significance. It is still likely that V is severely over-estimated at the states out of $\text{supp}~d_u$."
To be more specific, Theorem 3.2 and Theorem 3.3 in this paper only ensure underestimation of $\mathbb{E}_{s\sim d}V$ and $\mathbb{E}_{s \sim d_u} V$. Since both $d$ and $d_u$ are in-distribution because of the assumption, both theorems give no assurance for OOD states, whose value functions need to be underestimated the most. Besides, even in $\text{supp}~d_u$, some states may be highly overestimated. --- Reply to Comment 1.1.1: Comment: Thank you for the response. To facilitate further discussion, it is essential for us to have a clear understanding and agreement on the concepts of out-of-support and out-of-distribution, as there might be some confusion surrounding them. Out-of-distribution (OOD) and out-of-support refer to distinct aspects of data and probability distributions: OOD pertains to data points or samples that do not originate from the same underlying distribution as the training data, whereas out-of-support concerns data points or samples that possess zero probability (or probability density) under a given probability distribution. Under the given assumption, both $d$ and $d_u$ are considered in support, but they do not belong to the same distribution. ## Q1 follow-up > the effect that "CSVE reduces excessive exploration in OOD states" cannot be transferred to the Q-function via Eq. 6, since there is no OOD state in Eq. 6. To be more precise, the correct statement should be "there is no out-of-sample state in Eq. 6." The $s'$ could still be considered out-of-distribution if it is rarely reached in the trajectories of the dataset. In Eq. 5, since the $s'$ values are sampled using a dynamics model and penalized, their values are underestimated unless they are in or close enough to the dataset. This is due to the maximization of $\mathbb{E}_{s\in D}[V]$ and the effect of implicit neural network continuity regularization. We acknowledge that Eq. 6 is a compromise made for practical implementation.
In principle, the expectation should be taken as $E_{s\sim D, a \sim \pi, s' \sim \hat{P}}$, which however introduces the predictive reward $\hat{r}(s,a)$ (or $\hat{r}(s, a, s')$ in some tasks) that is hard to handle. Instead, since for $s \in D$ the pair $(s, a \sim \pi(\cdot|s))$ is almost always in or close to $D$, we use Eq. 6 as an approximation. ## Q2 follow-up Given the assumption, both $d$ and $d_u$ are in support but not in the same distribution. > Q2 1: under the condition $\text{supp}~d \subseteq \text{supp}~d_u$, Eq. 5 can never penalize the value for OOD states $s \notin \text{supp}~d_u$. Right, and that is exactly what we intend to do. > Q2 2: Since both $d$ and $d_u$ are in distribution because of the assumption, both theorems have no assurance for OOD states, whose value functions need to be underestimated the most. Besides, even in $\text{supp}~d_u$, some states may be highly overestimated. Under the definitions of out-of-distribution and out-of-support, it is incorrect to say 'both $d$ and $d_u$ are in distribution', and both theorems do have 'assurance for OOD states' (but no such assurance on out-of-support states).
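The Eq. 5 mechanism debated in this thread — pushing V down on model-sampled next states while pushing it up on dataset states — can be illustrated with a minimal tabular sketch. This is a hypothetical simplification for intuition only, not the authors' implementation; `conservative_v_loss` and the toy values are invented here, and the real objective also includes a TD term on dataset transitions.

```python
import numpy as np

def conservative_v_loss(v, dataset_states, model_next_states, beta=1.0):
    # Eq.5-style regularizer (sketch): penalize V on model-predicted
    # (potentially OOD) next states, maximize V on in-dataset states.
    penalty = v[model_next_states].mean()
    support = v[dataset_states].mean()
    return beta * (penalty - support)

# Toy MDP with 5 states: the dataset covers states {0, 1, 2}, while the
# dynamics model samples next states {3, 4} whose values are inflated.
v = np.array([1.0, 2.0, 3.0, 10.0, 12.0])
loss = conservative_v_loss(v, np.array([0, 1, 2]), np.array([3, 4]))
# Minimizing this loss (e.g. by gradient descent on V's parameters)
# deflates the overestimated out-of-sample values.
```

The sketch only isolates the penalty/support trade-off; it makes visible why the penalty applies to out-of-sample states reachable by the model rather than to out-of-support states, which is the point at issue in the exchange above.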
Summary: The paper proposes a method to tackle the overestimation of values in offline RL by focusing on state values instead of state-action values and using in-data policy optimization techniques based on model-based RL. They propose an actor-critic variation of their approach and apply the method to various offline RL tasks. Strengths: 1. The proposed method is derivative of existing ideas such as conservative Q-learning, which I see as a pro because it does not drastically depart from an already well-established algorithm. 2. I think there is appeal in the method in that it gives some sense of how results can be different when we start to incorporate state-based quantities instead of state-action based quantities, and when we include model-based approaches. The paper may be able to spark some interesting ideas for other papers. Weaknesses: 1. The paper should bold the results in its experiments section. It's incredibly tedious to read and discern where CSVE performs well and where it doesn't. 2. There is little intuition for why state-based methods in this case can work better than state-action ones. Given that the tweak is somewhat minimal, the paper should stress why this tweak can actually make things better than what is typically done. 3. In terms of bounds, I don't see a comparison to CQL. That is, the paper compares V-function bounds computed by CSVE against the true values, but not the V-function bounds computed implicitly by CQL (when CQL computes its Q-functions and uses them to derive the V-function) against the true V-function. I think that is related to point 2 and it would be illuminating. 4. I find it a bit unsettling that the results of prior work in Table 1 were just copy-pasted here. It's not clear to me whether it is an absolutely fair comparison, since setups between various papers can be different (random seeds etc). It would make more sense to me to re-run the algorithms in the setup used in the paper.
Moreover, the algorithms have been run only for 3 seeds, which is far too few since these are not image-based environments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I don't fully understand why model-based helps here. Aren't the issues of OOD actions still relevant, given that the transition model is a function of the action and the action is sampled from the evaluation policy? 2. Is this method scalable to when multiple policies generate the fixed dataset? And can we have a behavior-policy-agnostic version where we don't know what policies generated the data? 3. While this paper is different, there does seem to be some relation to [1]. In that paper, they learn the state density ratio and use that for better control. Can the authors comment on the difference between how this paper uses the state-density ratio vs. how [1] uses the state-density ratio (not the ratio learning part)? [1] Off-Policy Deep Reinforcement Learning by Bootstrapping the Covariate Shift. Gelada et al. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and detailed comments. We respond to the specific questions and comments below. ## 1. Concerns Concern 1: Table is hard to read In response to your suggestion, we have revised the paper and included two modified tables in the attached PDF. In these tables, we have highlighted the scores that are larger than 90% of the largest score. Concern 2: Why does the state-based approach help in our method? To clarify the contribution of our state-based method, we compare it with CQL and COMBO. As shown below, CQL underestimates state values point-wise, while COMBO and CSVE underestimate state values in expectation. - CQL: $$ \mathbb E_{\pi(a|s)}[\hat{Q}^{\pi}(s, a)] \leq \mathbb E_{\pi(a|s)}[Q^{\pi}(s, a)], \forall s \in D $$ $$ \hat{V}^{\pi}(s) \leq V^\pi(s) , \forall s \in D $$ - COMBO: $$\mathbb E_{s \sim \mu_0, a \sim \pi(a|s)}[\hat{Q}^{\pi}(s, a)] \leq \mathbb E_{s \sim \mu_0, a \sim \pi(a|s)}[Q^{\pi}(s, a)]$$ $$\mathbb E_{s \sim \mu_0}[\hat{V}^\pi(s)] \leq \mathbb E_{s \sim \mu_0}[V^\pi(s)] $$ - CSVE: $$ \mathbb E_{s \sim d}[\hat{V}^{\pi}(s)] \leq \mathbb E_{s \sim d}[V^{\pi}(s)]$$ where $\mu_0$ in COMBO represents the initial state distribution, $\hat{Q}^{\pi}$ is the estimate of $Q^{\pi}$, and $d(s)$ in CSVE represents any state distribution. The motivation for using the state-based method can be summarized as follows: - Compared to CQL, both CSVE and COMBO aim to achieve better performance by relaxing the conservative estimation guarantee from point-wise state values to the expectation of state values. However, their conservative approaches differ: CSVE directly penalizes out-of-distribution (OOD) states, while CQL and COMBO penalize OOD state-action pairs. - In comparison with COMBO, by directly penalizing the OOD states, CSVE obtains the same lower bounds but under a more general state distribution. This offers a more flexible space for algorithm design, which is one of the main reasons for penalizing $V$ rather than $Q$.
- By controlling the distance of $d$ to the behavior policy's discounted state distribution $d_u$, CSVE has the potential for further performance improvement. The bound is provided in the comparison with prior work in Section 3.1. We will refine this comparison in the future version. Concern 3: Questions about evaluation We understand your concerns regarding the comparison of results in Table 1. It is worth mentioning that copying results from prior work is a common practice in the offline RL literature; for instance, PBRL copy-pasted results from TD3-BC. However, we recognize the importance of ensuring a fair comparison across different setups. We have attempted to reproduce some of the results from previous work, but most of our reproduced results were inferior to those reported in the original studies. For example, we used the source code of COMBO to reproduce their results (Appendix C.4), but we could not achieve the same level of stability as presented in their paper. Therefore, we opted to use the results reported in the original papers for a fair comparison. Recently, we have conducted additional experiments with seven more seeds on the HalfCheetah-medium, HalfCheetah-medium-replay, and HalfCheetah-medium-expert tasks. The updated scores can be found in the attached PDF. The results are consistent with those reported in the paper. Given the limits on time and available computing resources, we cannot rerun all experiments within this rebuttal window. In a future revision, we will re-run all experiments over 10 seeds and report the results. ## 2. Questions Q1: issues of OOD actions CQL learns conservative Q-values $Q(s, a)$ on transitions $\{(s, a, r, s')\}^N$ of dataset $D$, where value overestimation can only be introduced by OOD actions on $s'$.
We argue that **this is also a limitation of CQL that hinders further performance improvement**, since it does not learn Q-values on any out-of-sample state $s \notin D$ even if the state is in-distribution with respect to the behaviour policy. In contrast to adding conservatism on Q-values as CQL does, **CSVE proposes the idea of imposing conservatism on state values and proves that this approach is better in theory.** The intuition is that proper exploration on out-of-sample but in-distribution states (i.e., the states near data as in our paper) has benefits for improving the learnt policy compared to the behaviour policy that collected the data. As a comparison, penalizing Q for OOD actions has no guarantees on OOD states (in CQL), or requires assumptions on a penalty coefficient (in COMBO). However, when directly penalizing the V function, CSVE gets the same lower bounds as COMBO but under a more general state distribution (Section 3.1). This means that for those OOD actions, we can still lower-bound their values by bounding the state value, thus decreasing the extrapolation error. We will add this discussion in the updated version. Q2: scalability to datasets generated by multiple policies Our method is indeed scalable to situations where the fixed dataset is generated by multiple policies. Throughout the development of our algorithm, we did not assume that we have access to the policy that generated the dataset. Moreover, in our experimental evaluation, the medium-expert dataset serves as an example of a mixed dataset, as it is generated by both a medium-level policy and an expert policy. Q3: Relation to [1] [1] aims to learn an accurate value function $V^{\pi}$, whereas in our work, we focus on obtaining a lower bound for $E_{s \sim d}[V^{\pi}(s)]$. In [1], it is necessary to learn the state-density ratio to ensure a more precise estimation of $V^{\pi}$. In contrast, our approach does not require learning this ratio.
Lower bounding $E_{s \sim d}[V^{\pi}(s)]$ allows us to avoid visiting out-of-distribution states when executing actions. We acknowledge that this paper is related to off-policy evaluation, and we will include it in the related work section of our updated version. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. A few things: - As for the table, I meant simply highlighting where your method does well (not every method that did better than some threshold). - Sorry, but I don't fully understand why the issues of OOD actions do not arise. Ultimately, if CSVE is relying on a model, then that involves sampling actions from some policy different from the behavior policy, which means it would still suffer from OOD overestimation (since the model was approximated using the dataset). Moreover, to have a policy improvement algorithm, we do need $Q$, which involves sampling from $\pi$ (in equation 6), which can result in overestimation again. Is it fair to say: your method still suffers from OOD actions, but only on states that appear close to the dataset, as opposed to suffering from OOD actions on bad OOD states? Clarification on the last point would be appreciated. Thank you. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback. In response to the table highlighting concerns, we recognize that this issue has been raised by three reviewers. As such, we have strived to strike a balance among these suggestions to present our results more effectively. Regarding the issue of out-of-distribution (OOD) actions, we acknowledge that our method is not entirely immune to OOD action overestimation. However, the impact of OOD actions should be mitigated in our approach.
On the one hand, during the value estimation in Eq.5, all states are present in the dataset, while the policy $\pi$ is constrained to be close to the behavior policy (first term of Eq.9), allowing for slight exploration with high confidence of non-overestimation ensured by the model prediction of next states. On the other hand, during the policy learning in Eq.9, the additional action exploration (second term of Eq.9) is strictly applied to states in the dataset and only provides a bonus to actions that (1) themselves and their model-predictive next-states are both close to the dataset (ensured by the model) and (2) their values are favorable even with conservatism. We will revise our statement on OOD actions to provide better clarity. We hope this explanation addresses your concerns, and we appreciate your suggestions for improvement.
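The safeguard described above — granting an exploration bonus only to actions whose model-predicted next states stay close to the dataset, and valuing them conservatively — might look like the following sketch. The function name, distance threshold, and interface are all invented here for illustration; this is not the authors' code.

```python
import numpy as np

def exploration_bonus(next_state_pred, dataset_states, v_hat, eps=0.5):
    # Condition (1): the model-predicted next state must be near the data.
    dists = np.linalg.norm(dataset_states - next_state_pred, axis=1)
    if dists.min() > eps:
        return 0.0  # far from data: no bonus, exploration is suppressed
    # Condition (2): the bonus is the conservative value of that state, so
    # only favorably-valued near-data transitions are actually rewarded.
    return v_hat(next_state_pred)
```

Under this reading, such a bonus would enter a policy objective like Eq. 9 as the second, weighted term alongside the in-sample regression term, which is why overestimation from OOD actions is confined to states near the data rather than arbitrary OOD states.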
Summary: << I have read the authors' rebuttal and have raised my score based on the discussion >> The paper discusses challenges in Reinforcement Learning (RL), particularly in real applications where online learning from scratch is often risky and unfeasible. To address this, the authors introduce Conservative State Value Estimation (CSVE), a novel offline RL approach that deviates from traditional methods that estimate conservative values by penalizing the Q-function on Out-Of-Distribution (OOD) states or actions. Instead, CSVE penalizes the V-function directly on OOD states. The authors present theoretical evidence that CSVE provides tighter bounds on true state values than Conservative Q-Learning (CQL), with similar bounds as COMBO but under more general discounted state distributions. This allows for potentially more effective policy optimization within the data support. The primary contributions of the paper include the proposal and theoretical analysis of conservative state value estimation, the introduction of a practical actor-critic algorithm applying CSVE, and an experimental evaluation demonstrating superior performance of CSVE over prior methods based on conservative Q-value estimation on tasks from the Gym and Adroit suites of the D4RL benchmarks. The simplicity of the proposed changes augments their potential for practical adoption in the field. Strengths: Clarity: The manuscript is effectively composed and straightforward to comprehend. However, there's room for improvement in the explanation of certain equations. By simplifying these complex components, the reader's cognitive load could be significantly reduced. Technical Soundness: The theoretical foundations of the paper are solid. The authors offer convincing theoretical derivations to support the proposed approach, contributing to the technical soundness of the paper. Originality: Overestimation of action values represents a recurring challenge in the offline reinforcement learning landscape.
The authors innovatively address this issue by learning a conservative estimate of state values and penalizing OOD states, offering a potentially tighter bound on the actual state value function. In my view, this represents a novel contribution to the field. Significance: Attaining a tighter bound on the value estimate can considerably enhance performance in offline RL problems. Additionally, the simplicity of the proposed approach augments its potential for widespread adoption and application. Weaknesses: W1: The experimental results primarily focus on a range of continuous control tasks, neglecting discrete action space problems. Furthermore, the decision to use only three seeds for performance comparison appears limited, especially considering the growing trend of using ten seeds and applying metrics such as the Interquartile Mean (IQM) [1]. This raises concerns about the statistical validity of the proposed approach. W2: The observed performance gains from implementing CSVE do not appear consistently significant across domains, nor do they follow a discernible pattern. The sporadic nature of these gains may call into question the efficacy of the proposed approach in diverse applications. [1] https://arxiv.org/abs/2108.13264 Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: Assume we express Q(s,a) as R(s,a) + V(s') and learn a world model for one-step transitions rather than predicting Q(s,a) directly. In this case, minimizing the CQL objective in Equation 1 would implicitly penalize the state value of OOD states. Could the authors explain how this approach differs from the one proposed in this paper? Q2: What implications would an inaccurate world model have, specifically when the predicted next state $\hat{s'}$ doesn't match the actual next state ${s'}$? Q3. Could the authors provide experiment results with more seeds? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not discuss the limitations of their work. No discussion needed regarding potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.', 'Ethics review needed: Compliance (e.g., GDPR, copyright, license, terms of use)'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: neglecting discrete action space problems; only three seeds for performance comparison appears limited Recently, we have conducted additional experiments with seven more seeds on the HalfCheetah-medium, HalfCheetah-medium-replay, and HalfCheetah-medium-expert tasks. The updated scores can be found in the attached PDF. The results are consistent with those reported in the paper. Given the limits on time and available computing resources, we cannot rerun all experiments within this rebuttal window. In a future revision, we will re-run all experiments over 10 seeds and report the results. Though discrete action spaces are excluded from our analysis, in future work we aim to include a wider range of tasks to better demonstrate the effectiveness and versatility of the proposed approach. W2: The observed performance gains from implementing CSVE do not appear consistently significant. We have updated Table 1 and Table 2 in the attached PDF for a better illustration of the performance gains of our method. Considering the average score, our method is on par with PBRL in the gym domain and better than PBRL in the adroit domain. We agree with the reviewer that our method is not consistently significantly better than other methods, but overall it outperforms them. Q1: Minimizing the CQL objective in equation 1 would implicitly penalize the state value of OOD states Expressing Q(s,a) as R(s,a) + V(s') and minimizing the CQL objective in Equation 1 would indeed penalize the state values of OOD states. However, CQL underestimates state values point-wise, while CSVE underestimates state values in expectation.
- CQL: $$ \mathbb E_{\pi(a|s)}[\hat{Q}^{\pi}(s, a)] \leq \mathbb E_{\pi(a|s)}[Q^{\pi}(s, a)], \forall s \in D $$ $$ \hat{V}^{\pi}(s) \leq V^\pi(s) , \forall s \in D $$ - CSVE: $$ \mathbb E_{s \sim d}[\hat{V}^{\pi}(s)] \leq \mathbb E_{s \sim d}[V^{\pi}(s)]$$ CQL learns conservative Q-values $Q(s, a)$ on transitions $\{(s, a, r, s')\}^N$ of dataset $D$, where value overestimation can only be introduced by OOD actions on $s'$. We argue that **this is also a limitation of CQL that hinders further performance improvement**, since it does not learn Q-values on any out-of-sample state $s \notin D$ even if the state is in-distribution with respect to the behaviour policy. The intuition is that proper exploration on out-of-sample but in-distribution states (i.e., the states near data as in our paper) has benefits for improving the learnt policy compared to the behaviour policy that collected the data. As a comparison, penalizing Q for OOD actions has no guarantees on OOD states (in CQL). However, when directly penalizing the V function, CSVE gets the same lower bounds as COMBO but under a more general state distribution (Section 3.1). This means that for those OOD actions, we can still lower-bound their values by bounding the state value, thus decreasing the extrapolation error. We will refine our discussion of this comparison in the updated version. Q2: What implications would an inaccurate world model have With an inaccurate world model, the theoretical part of our method will not be affected, given that the whole derivation does not require a model. However, the optimization part of Equation 9 will be affected. An inaccurate dynamics model may guide the policy to produce erroneous actions that lead to imaginary states with spuriously high values. In the experiments, we find that the effect of model bias on RL performance is subtle in the medium tasks. We include experiments regarding different model errors in Appendix B.
We use the average L2 error on transition prediction as a surrogate for model bias. Specifically, for the HalfCheetah task, there is no observable impact of model errors on scores, while in the Hopper and Walker2D tasks, there is only a slight decrease in scores as the errors increase. Q3. Could the authors provide experiment results with more seeds? We provide results over 10 seeds for three tasks in the attached PDF. The results are consistent with those reported in the paper. We will re-run all experiments over 10 seeds and report the results. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my previous concerns and providing additional results on the Cheetah problem using more seeds. I believe expanding this approach by using multiple seeds for other problems will further enhance the robustness of the results. I have a few more questions based on the provided feedback: Q4: From the presented results, it appears that CSVE outperforms the baselines when the data comes from an expert source. However, its performance seems to diminish when the data comes from a random or medium-quality source. Could you discuss why the quality of the data source seems to have a greater impact on CSVE's performance than on the other baselines? Q5: I would appreciate further clarification on how the policy is derived using the conservative value function. In particular, how is the Q-function learnt or extracted, and what significance does the "conservative" value estimate hold within the context of AWR? --- Reply to Comment 1.1.1: Comment: Thanks for the valuable suggestions! We shall test with more seeds on other tasks as well. ## follow-up on Q4 Not exactly. In fact, for mujoco tasks, CSVE has more advantage on datasets from random to medium types, while on expert datasets all algorithms already perform well and CSVE performs only slightly better or on par with others.
For adroit tasks: (1) on 'human' and 'cloned' datasets, all algorithms fail in 3/4 tasks, while CSVE performs significantly better in the remaining 1/4 of tasks; (2) on 'expert' datasets, all algorithms work reasonably well, while CSVE performs better in some tasks and on par with every baseline in the others. Thus, we think data quality has a significant impact on the absolute performance of all algorithms, and compared to the baselines CSVE indeed has more of an advantage on datasets from random to medium types than on expert datasets. ## follow-up on Q5 The policy is derived by solving Eq. 9, which balances in-sample learning (the first term) and exploration based on conservative value estimation (the second term). - The Q-function is learnt via Eq. 6 during the value estimation phase. Since the AWR procedure (the first term $L_{\pi}$ in Eq. 9, defined in Section 4.2) is carried out only on $(s,a)$ pairs in the dataset, the significance of the 'conservative' value estimation holds automatically. - Besides, the exploration term is kept safe via the 'conservative' value estimates of predicted next states.
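The AWR term referenced in the Q5 answer uses the standard advantage-weighted form; a minimal sketch of how conservative estimates would enter the regression weights is below. This is our reading of the discussion with an invented clipping constant, not the authors' exact code.

```python
import numpy as np

def awr_weight(q_sa, v_s, beta=1.0, w_max=20.0):
    # Standard AWR weight exp(A / beta) with clipping. Here Q would come
    # from an Eq.6-style update and V is the conservative state-value
    # estimate, so actions leading toward penalized (OOD) states get
    # deflated advantages and hence small regression weights.
    adv = q_sa - v_s
    return min(np.exp(adv / beta), w_max)
```

With these weights, the first term of Eq. 9 is simply a weighted log-likelihood of dataset actions, which is why conservatism carries over to the extracted policy without any explicit action penalty.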
Summary: In this paper, the authors propose an offline RL algorithm for conservative state value estimation (CSVE). This work differs from prior works that learn conservative state-action values, like CQL or COMBO, in that it penalizes OOD states rather than OOD state-actions. The authors show that CSVE gets similar theoretical lower bounds to COMBO, but under more general state distributions. The practical version of the algorithm is similar to COMBO with the following key differences: * CSVE learns an NN value function and penalizes OOD states (rather than state-actions for the Q-function) * the policy is updated using AWR rather than SAC * the learned dynamics models are used for 1-step rollouts to generate fictitious state transitions for the OOD state penalty * the learned dynamics model is additionally used to allow the policy to explore local 1-step transitions around the data The authors evaluate the CSVE method on the Mujoco and Adroit tasks from the D4RL benchmark. Generally, they find that their method outperforms or matches the relevant baselines (CQL, COMBO, AWAC) on these tasks. Strengths: To the best of my knowledge, the proposed CSVE algorithm is novel and is an interesting alternative to COMBO. The authors present good theoretical and empirical results illustrating the potential benefits of CSVE over COMBO and CQL. Thus, I believe that further exploring whether to incorporate the conservative penalty into a state-action Q-function or the state value function, across a wider range of settings, is an interesting direction for future research, and that this work provides a worthwhile contribution to the research field. Weaknesses: There are a lot of small syntactical and word-choice errors that do distract the reader and should be addressed in future versions, but generally the ideas presented in the paper are still comprehensible and well organized.
It is unclear to me exactly how the "Model-based Exploration on Near States" is performed and how Equation 9 is being optimized. Specifically, it is unclear whether the 2nd term in Equation 9 is optimized with some variant of DPG (like TD3 or SAC), with the gradient taken through the learned dynamics model, or by some variant of AWR as with the rest of the algorithm. In my opinion, this design choice is quite significant and should be fully explained in the main body of the paper. Additionally, the tradeoff between the 2 terms in Equation 9 seems important, and I would appreciate the ablation study over different values of $\lambda$ being in the main body of the paper. The Experiments section is a bit hard to follow, mostly because Tables 1 and 2 are hard to interpret without assistance. I think the clarity of the results would be greatly improved by bolding the top scores and including aggregate scores, as is done in many prior offline RL works. This would make it much easier to interpret how CSVE compares to prior algorithms. Finally, the current ablation study in section 5.2 needs more analysis; otherwise it doesn't seem to add much to the main body of the paper. I think an ablation with accompanying analysis on varying $\lambda$ and $\tau$ would be more interesting to include in the main body of the paper, as those parameters seem, in my opinion, more related to the novel components presented in this work. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Are there any environments or data regimes where you expect to see a bigger improvement for CSVE relative to prior methods? The current results are a bit underwhelming considering that many of the D4RL tasks are pretty saturated. Perhaps evaluating on the half-cheetah jump or ant angle tasks that have been tackled in other offline MB RL approaches like COMBO could lead to a stronger result.
Considering that you only require 1-step predictions from your dynamics model, you could also potentially test on environments with more complicated observations like images. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No obvious limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and detailed comments. We are glad to hear that Reviewer EMsr believes that our work presents good theoretical and empirical results illustrating the potential benefits of CSVE. We respond to specific questions and comments below. ## 1. Main concerns Concern 1: A lot of small syntactical and word-choice errors Thank you for pointing that out. We appreciate your feedback on the readability of our paper and recognize that there are some syntactical and word-choice errors that might be distracting. In future revisions, we will give more attention to these issues, ensuring that the text is polished and easier to read. Concern 2: How the "Model-based Exploration on Near States" is performed The optimization of the second term in Equation 9 involves calculating the gradient through the learned dynamics model. This is achieved by employing analytic gradients through the learned dynamics to maximize the value estimates. It is important to note that the value estimates rely on the reward and value predictions, which depend on the imagined states and actions. As all these steps are implemented using neural networks, the gradient is analytically computed using stochastic back-propagation, a concept inspired by Dreamer [1]. Additionally, the detailed implementation can be found in the accompanying code provided with our paper. We agree that this discussion is important and will include it in our updated version. Concern 3: The Experiments section is a bit hard to follow In response to your suggestion, we have revised the paper and included two updated tables in the attached PDF. In these tables, we have highlighted the scores that exceed 90% of the highest score. The average score is also provided for a more comprehensive comparison. Concern 4: More ablations should be added to the main body We've included an ablation study on different $\lambda$ in Appendix B.
And according to your follow-up suggestion, we will place the ablation of $\lambda$ in our main text and move the current ablation study of $\beta$ to the appendix. ## 2. Questions > Q1: Any environments or data regimes where we expect to see a bigger improvement? We fully agree that further exploration of more applications is valuable. We checked the halfcheetah-jump and ant-angle tasks used in MOPO and COMBO. However, we ran into some reproduction difficulties using the source code of COMBO (Appendix C.4). In future work, we would indeed love to test CSVE on tasks like Half-Cheetah Jump, Ant Angle, and environments with more complex observations such as images. This will allow us to better assess the method's potential and capabilities across a broader range of applications and compare it with existing approaches. In general, CSVE has an advantage in the broader set of scenarios where the environment is a well-defined real-world physical system and the behaviour policy is not optimal. Compared to IQL and CQL, CSVE has less value underestimation and better exploration on state-action pairs near the dataset. References: [1] Dream to control: Learning behaviors by latent imagination, ICLR 2020. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. If they indeed follow through with the changes mentioned, then I would raise my score. --- Reply to Comment 1.1.1: Comment: Thanks! We sincerely appreciate the valuable suggestions and shall improve the paper accordingly.
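The analytic-gradient procedure described in the response to Concern 2 can be illustrated with a deliberately tiny example. Here the "learned" dynamics are a fixed linear map and the value is quadratic, so the chain-rule gradient can be written by hand; in the paper both are neural networks and back-propagation supplies the gradient automatically. All names and numbers below are illustrative, not taken from the paper's code:

```python
import numpy as np

# Hypothetical "learned" one-step dynamics s' = A s + B a and value V(s') = -||s' - g||^2.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
g = np.array([0.0, 0.5])  # state the value function prefers

def next_state(s, a):
    return A @ s + B @ a

def value(s_next):
    return -np.sum((s_next - g) ** 2)

def grad_action(s, a):
    # Chain rule through the dynamics model: dV(f(s, a))/da = -2 (f(s, a) - g)^T B.
    return -2.0 * B.T @ (next_state(s, a) - g)

# Gradient ascent on the action maximizes the value of the predicted next state.
s = np.array([0.0, 0.0])
a = np.array([0.0])
for _ in range(200):
    a = a + 0.1 * grad_action(s, a)
v_final = value(next_state(s, a))
```

The point of the sketch is that the action is improved by differentiating the value of the *imagined* next state with respect to the action, exactly the role stochastic back-propagation plays in the Dreamer-style setup described above.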
Rebuttal 1: Rebuttal: We have included two updated tables in the attached PDF. In these tables, we have highlighted the scores that exceed 90% of the highest score. The average score is also provided for a more comprehensive comparison. We have conducted additional experiments with seven more seeds on HalfCheetah-medium, HalfCheetah-medium-replay, and HalfCheetah-medium-expert tasks. The updated scores can be found in the attached PDF. The results are consistent with those reported in the paper. Given the limits on time and available computing resources, we cannot reproduce all experiments within this rebuttal window. In the future revision, we will re-run all experiments over 10 seeds and report the results. Pdf: /pdf/5addb16e2dd9bbff6b84e150d768329abc1e0fe7.pdf
NeurIPS_2023_submissions_huggingface
2023
Differentially Private Statistical Inference through $\beta$-Divergence One Posterior Sampling
Accept (poster)
Summary: This paper combines OPS (one posterior sampling) with bounded beta-divergence, resulting in a DP mechanism that applies to a general class of inference models. This approach bounds the sensitivity of the procedure without bounding the feature space or statistical functionals thereby improving on previous approaches. The paper provides empirical evidence for the performance of the approach against relevant baselines and on multiple different data sets. Strengths: The paper is well motivated, I agree that overcoming the limitations of sensitivity bounding is an important direction for work on differentially private inference. The theoretical analysis is careful and precise. Table 1 is very helpful for baselining the approach and clarifying its novelty. Weaknesses: The presentation of section 3 is extremely dense and jargon-y, with very little intuition given for the key formulas (e.g. equation 4). The paper would be significantly easier to follow to those unfamiliar with this class of Bayesian inference algorithms (which are fairly esoteric as far as I gather) if some additional intuition was included. Figure 2 is very hard to read given the amount of information being conveyed and the size (e.g. the red and orange colors are similar, what is PM?). From looking at the figure, it is also not obvious to me that the proposed method is better. Perhaps extrapolating the line out in n, that would be the case, but then these experiments should be run with larger sample sizes. If the reason for not doing so is the computational infeasibility then this should be noted as a limitation of the method. The description of the experiments was lacking clarity in my opinion. What is the prediction goal of training these models? what information is contained in these datasets? how many covariates were used? how/why were hyper-parameters chosen? These details should be included in the main body of the paper. 
I would be more convinced if the experiment could show that the proposed approach replicated a finding without DP with lower error than alternatives. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: How can this approach be used for uncertainty quantification over the parameters or posterior predictive? In approaches where the exact posterior is targeted, this is obvious but it is less clear with this approach. If this requires additional privacy budget, this should be noted. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors should address the computational demandingness of this approach compared to alternative approaches. How large of a model/dataset could this realistically be run on? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the importance of our contribution to overcoming the limitations of sensitivity bounding. We address your questions and concerns below. ### Improving the presentation of Sec. 3 and providing more intuition The most important points for readers unfamiliar with Bayesian and general Bayesian inference to understand are: + The Gibbs posterior used by Wang [80] and Minami [65] generalises the standard Bayesian posterior to $\pi(\theta|D) \propto \pi(\theta)\exp(\sum_{i=1}^n w\log f(D_i; \theta))$. The parameter $w > 0$ provides some flexibility in adapting to the sensitivity of the posterior. However, bounding the sensitivity of $\log f(D_i; \theta)$ is difficult outside very simple problems. + The generalised Bayesian posterior of Bissiri [15] generalises this further to $\pi(\theta|D) \propto \pi(\theta)\exp(-\sum_{i=1}^n \ell(D_i; \theta))$ (Eq. 3), providing the flexibility to choose a loss with bounded sensitivity. + We chose the beta-divergence loss (Eq. 4), which still depends on the model $f(\cdot;\theta)$, allows data generating parameters to be learned unbiasedly, and has bounded sensitivity. To improve clarity and intuition, we propose the following changes: + ll 123 "Note $w = 1$ recovers the standard posterior and $w\neq 1$ provides flexibility to adapt the posterior to the level of privacy required.'' + ll 153 "OPS has struggled as a general-purpose tool for DP estimation as bounding the sensitivity of $\log f(x;\theta)$ is difficult.'' + ll 155 "high posterior density is assigned to parameters that achieve small loss on the data.'' + ll 156 "The Gibbs posterior in (2) is recovered using the weighted negative log-likelihood … and the standard posterior for $w=1$.'' + ll 158 "The framework of Bissiri et al. [15] provides the flexibility to choose a loss function with bounded sensitivity.
An alternative loss function…'' + ll 161 "the first term in (4) contains the negative likelihood, so parameters that make the data likely achieve low loss. However, it is raised to the power $\beta-1$ (with $\beta > 1$), prescribing relatively smaller loss to observations unlikely under that parameter than the log-likelihood does. A key feature of (4) is that while $\lim_{f\rightarrow 0} -\log f = \infty$, $\lim_{f\rightarrow 0} -\frac{f^{\beta-1}}{\beta-1} = 0$ for $\beta > 1$. The second, integral term depends only on the parameters and ensures the $\beta$D loss can learn the data generating parameters.'' + ll 163 "$\beta = 1$ recovers the negative log-likelihood.'' + ll 180-183 "Rather than bounding $\log f(\cdot;\theta)$, we replace it in (3) with the $\beta$D loss from (4), which is naturally bounded when the density is bounded.'' We will also use more subsections to improve readability. ### Improving the description of the experiments Thank you. Appendix B will be updated to contain the following information for each dataset/experiment: + What the response was + What the predictors were + Specifications of priors, regularisers and optimisation parameters Most hyperparameters resulted directly from the DP requirement. ### Extensions to uncertainty quantification In this work, we are not advocating Bayesianism as the correct paradigm for inference, but as a convenient way to release DP parameter estimates as an alternative to adding noise to the empirical risk minimizer. We demonstrate improved performance and flexibility. If uncertainty quantification is required, the privacy budget can be split and more than one sample can be released from a posterior with higher $\beta$. Interesting further work could look at the utility/privacy trade-off here. To Sec.
6 ll 369 we will add "Extensions of this work could consider the benefits of dividing the privacy budget to allow for the release of more than one sample paving the way for parameter inference as well as estimation.’’ ### Addressing the computational demandingness of this approach compared to alternative approaches In Sec. 4 and 6 we identify the limitation that our approach relies on a perfect sample from the posterior. This is shared by the Gibbs OPS approaches of Wang [80] and Minami [65], but not by the approaches of Chaudhuri [19] or DPSGD [1]. These rely on techniques from optimisation that are known to scale better in the number of parameters. Obtaining posterior samples is left for future work. In Sec. 4 we discuss how the $\beta$D-Bayes posterior is naturally suitable for application to the DP-MCMC literature. A great deal of research has gone into scaling MCMC methods for logistic regression and there are tailored MCMC methods for neural networks as well. Finally, these procedures cannot be repeated, as repeated estimation would leak information, and only require one posterior sample (after reaching stationarity). We hope that the considerable improvements in performance make the increased computational costs worthwhile. Our experiments show that $\beta$D-Bayes OPS outperforms Chaudhuri [19] and DPSGD [1] as well as the Gibbs OPS methods. We hope that our method encourages further research into computational procedures for these models, including scaling MCMC to large neural networks or alternatively DP variational Bayes methods to approximate the $\beta$D-Bayes posterior [41, 45, 70]. To Sec. 6 l. 377 we will add: > A further limitation of OPS is the computational burden required to produce posterior samples, particularly in larger neural networks with many parameters. 
However, we argue that the improved performance justifies such a cost, which is mitigated by the fact that such inference can only be run once to avoid leaking privacy and that only one posterior sample is required. We hope that the performance of our method encourages further research that can tackle these computational challenges, including scaling MCMC to large neural networks or developing DP variational inference approaches [41, 45, 70] to the $\beta$D-Bayes posterior.
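The boundedness argument made in this rebuttal can be checked numerically. The sketch below evaluates the data-dependent term of the $\beta$D loss from Eq. 4 for a standard Gaussian model; the integral term, which does not depend on the observation, is omitted, and `beta_loss` with $\beta = 1.5$ is our illustrative choice, not the paper's code:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def log_loss(x):
    # Standard negative log-likelihood: unbounded as f(x) -> 0.
    return -math.log(gaussian_pdf(x))

def beta_loss(x, beta=1.5):
    # Data-dependent term of the betaD loss: -f(x)^(beta-1)/(beta-1).
    # The integral term of Eq. 4 does not depend on x and is omitted here.
    return -gaussian_pdf(x) ** (beta - 1.0) / (beta - 1.0)

# An outlying observation makes the log-loss arbitrarily large, while the betaD
# term stays in the bounded interval (-f_max^(beta-1)/(beta-1), 0]; this is what
# bounds the per-observation sensitivity.
outlier = 20.0
ll = log_loss(outlier)   # grows without bound in |x|
bl = beta_loss(outlier)  # vanishingly small in magnitude
```

Because each observation contributes a bounded summand rather than an unbounded log-density, the sensitivity of the generalised posterior is controlled, which is the crux of the method discussed above.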
Summary: In this paper, the authors propose a modified version of the one posterior sample (OPS) mechanism. The OPS mechanism releases a sample from a posterior distribution in which the likelihood is tempered based on the privacy guarantee. This can be shown to be a simple instance of the exponential mechanism, which provides the privacy guarantees. Instead of tempering the likelihood (or equivalently scaling the sum of log-probabilities and taking the exp), the authors use the $\beta$-divergence loss as the utility loss for the exponential mechanism. Compared to basic OPS, this approach has the benefit that the sensitivities of the log-probabilities don't need to be bounded. Instead, using the exponential mechanism with the $\beta$-divergence loss requires the pdf's of the model to be bounded from above (the lower limit is trivially 0). Such a bound exists naturally for almost every distribution (except for something like Dirac's delta dist.), while the log-pdf's typically don't have such a bound. The authors further note that the $\beta$-divergence loss can also be used in other DP Bayesian inference methods. The experimental results demonstrate that the proposed method is able to outperform some previous works in DP logistic regression for various data sets, as well as DP-SGD in learning a simple NN for some $\epsilon$'s and numbers of observations. Strengths: The unbounded sensitivity of the log-probabilities is a challenge in all DP probabilistic inference methods. Since the log-probabilities don't typically have a lower bound, previous approaches typically resort to clipping or otherwise bounding the log-probabilities to obtain bounded sensitivity for the perturbation. The proposed solution overcomes this challenge by using probabilities instead of log-probabilities, thus avoiding any possible clipping bias. Therefore this attempt is an interesting contribution to DP probabilistic inference. Weaknesses: The limitations of the method could be further discussed.
For example, there is a reason why probabilistic inference is typically implemented using log-probabilities. Using the pdf's in the Bayesian updates would often lead to numerical issues, whereas the log-probabilities are typically more stable. Also, since OPS produces only a single sample of the posterior, it could be far away from the posterior mean. And if you were interested in Bayesian inference, a single sample would not be too interesting, as it contains no information on the actual uncertainty. The experimental results have some clarity issues. For example, the lines in Fig 2 seem a bit inconsistent between the panes. I will include some questions about these in the questions section. Also, I think comparing the proposed approach to any of the OPS works would be a relevant comparison which is currently missing from the paper. After rebuttal: Authors have sufficiently addressed my concerns in their rebuttal. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Is the convergence of the STAN run for the $\beta$D-Bayes somehow included in the privacy parameters? You do discuss this in Section 4, but I don't see any $\delta$'s reported in the experiments. - Also, while STAN gives you the warnings, aren't the tests for geometric ergodicity there only statistical estimates? Wouldn't this give a further error probability that should be taken into account in the privacy analysis? - Since you use the pdf's instead of the log-pdf's in the $\beta$D-Bayes, I wonder how it will affect the numerical stability of the inference. Do you need to adjust the precision somehow to compensate for this, or will it not be a problem at all? Having some discussion about this would seem appropriate. - Section 5: You say "Note that [19] still presents the state-of-the-art in logistic regression [40].". Is this really so? I checked [40] and I cannot see them suggesting this.
I would actually be really surprised if DP-SGD with all the modern accounting machinery would not outperform the logistic regression method of [19]. - Fig 2: the linestyles in the legend look odd. In the topmost figure, the green line is dashed, which is not the case for the two lower ones. Furthermore, the shade of green looks a bit different between the first row and the rest. Also, it is a bit hard to say which gray line is which (the $\beta$D-Bayes or LogReg (1)). I would encourage the authors to make the line styles in this figure consistent and more distinct from each other. - Does the PM abbreviation stand for posterior mean in Figs 2 and 3? - The SGD in Fig 3, is it just non-DP SGD? - If so, I'm really confused why it performs so poorly in the experiments. Why would you obtain so much smaller loss from the $\beta$D-Bayes approach? I guess both should be optimizing at least a similar likelihood function, so the main difference would arise from the prior, which should have a more limited effect as the number of observations increases. References are as in the paper: [19] Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(3), 2011. [40] Naoise Holohan, Stefano Braghin, Pól Mac Aonghusa, and Killian Levacher. Diffprivlib: the IBM differential privacy library. arXiv preprint arXiv:1907.02444, 2019. Minor questions/comments/suggestions: - Denoting the data set with the same variable as the variable in the integral in (4) looks a bit odd. Of course it's technically fine, but you might want to consider changing it. - line 157: should you have a $-\log f(D; \theta)$ instead of $\log f(D; \theta)$? - a minor comment: "and the data can always be transformed–without changing the mean estimation–to avoid a very small variance.". Can we really _always_ do this without accessing the data?
I would imagine that in order to bound the small variance, you would first need to inspect how small the variance is, and then based on that upscale the data. Now, depending on the size of the data, the variance could be affected by a single neighbouring sample, and hence I would imagine you would need to take this into account in the privacy analysis. - typo, line 265: "Bayescan" - typo, appendix, proof for Lemma 1: additional $)$ in the second line Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: There are some limitations that I raised in the weaknesses which should be further discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging our contribution to DP probabilistic inference and the potential of moving away from log-score updating. We address your questions and concerns specifically below. ### Numerical stability of using pdf's instead of the log-pdf's The $\beta$D loss *sums* the p.d.f.s raised to a power rather than multiplying them (Eq. 3 and 4) and is therefore not subject to numerical issues. ### Utility of a single sample from the posterior We agree that when Bayesian inference is the goal, one sample is of little use. In this work, however, we are not advocating Bayesianism as the correct paradigm for inference, but as a convenient way to release DP parameter estimates as an alternative to noising empirical risk minimizers. One sample from the posterior is an unbiased and consistent estimate of the posterior mean, and our experiments show that $\beta$D-Bayes improves estimation performance and extends applicability compared to current methods. If uncertainty quantification is required, the privacy budget can be split and more than one sample can be released from a posterior with higher $\beta$. Interesting further work could look at the utility/privacy trade-off here. In Sec. 6 ll 369 we will add ''Extensions of this work could consider the benefits of dividing the privacy budget to allow for the release of more than one sample paving the way for parameter inference as well as estimation''. ### Distance of a single sample from the posterior mean Ideally, one would release the posterior mean, but randomness of the estimate is required in order to provide differential privacy as per its definition. Methods that noise MLEs (e.g. Chaudhuri [19]) also release one sample that could be far from the original MLE. Our experiments, e.g.
Fig 2 and 3, show that a sample from the $\beta$D-Bayes posterior is closer on average to the data generating parameters than the noised MLE in [19] or a sample from the Gibbs posterior [65] for the same privacy level, so the cost of ensuring privacy is lower under $\beta$D-Bayes OPS. ### Favourable comparison to other OPS works already included Fig 2 compares our $\beta$D-Bayes OPS with Minami [65], who use Gibbs posterior OPS and whose method improves on Wang [80] for logistic regression. We show that $\beta$D-Bayes OPS generally improves on the performance of Minami [65]. For neural network classification and regression we are not aware of any OPS methods that apply, and therefore we compare to DPSGD instead. We believe one of our key contributions is providing a setting in which OPS can be extended to a wider class of models. ### SGD in Fig 3 is non-DP SGD SGD is run with the same hyperparameters as DPSGD but without clipping and noising of gradients. We will explicitly mention this. ### Explaining the performance of non-DP SGD in the experiments The goal of this experiment was the comparison of the private methods. For SGD we chose the same parameters as for DPSGD, which includes a small number of epochs. While non-private performance could believably be improved with other hyperparameters, we included SGD as an ablation of DPSGD. We will explain this further in the appendix, and Fig 3 no longer compares $\beta$D-Bayes with SGD in the non-private setting. ### Utility of stan warnings despite them being statistical estimates Theorem 1 proves that a sample from the exact posterior is DP, and following Minami et al. [65] and Wang et al. [80], we assume that a sample from a chain after convergence is representative of a sample from the posterior. We use the absence of stan warnings to justify this. While, as you point out, these diagnostics are only estimates, they are widely adopted measures for assessing MCMC convergence outside of the privacy setting.
It is reasonable to assume that the data holder can choose a large enough number of steps to obtain a posterior sample without the occurrence of such stan warnings. Thank you for pointing out that these diagnostics are missing from the paper. We will report the ESS and R-hat scores in the appendix, along with the number of warm-up steps and iterations used. We specify on ll 269 how the properties of $\beta$D-Bayes make it suitable for applications of DP-MCMC, where the whole MCMC chain is made private, but we leave investigating this for future work. We will further elaborate on the limitation of stan warnings in Sec. 6. ### Current relevance of [19] Thank you for checking. [40] is an example use case of [19] (DP logistic regression proposed by Chaudhuri et al.) from 2019, suggesting that [19] was not outdated then. Further, [19] is designed specifically for logistic regression, allowing for the use of 2nd order optimisation techniques and privatising the final converged logistic regression parameters. DPSGD, on the other hand, privatises each gradient step and can only be run for a limited number of iterations, thus introducing more noise for the same level of privacy. We re-formulate this sentence to reflect that "[19] still presents a widely-used implementation of DP logistic regression". We compare $\beta$D-Bayes OPS with DPSGD in our neural network classification and regression examples (Fig 3 and 6). ### "[T]he data can always be transformed…to avoid a very small variance." We agree the original comment was vague, and yes, you do not want to scale by something that depends on the data. A strategy that can always be implemented is to add $0$ mean Gaussian noise with variance $s_0^2$ to the responses $y$. Then you can trivially bound the response variance of the linear regression from below by $s_0^2$.
To ll 235 we add: > In situations where a natural lower bound is not available, one can guarantee this bound by adding iid 0 mean Gaussian noise with variance $s_0^2$ to the observed responses $y$. This is conceptually different from noising the parameter estimates: it adds 0 mean noise with a very small variance (we used $s_0^2 = 0.01$) to every observation and doesn't affect the regression parameter estimates. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! Your response addresses most of my concerns and I will raise my score. That being said, there are a couple of things I would encourage the authors to include in the final version of this paper. 1. Explicitly mention in your experimental section that PM stands for the posterior mean, which (I guess) is not DP but serves as the baseline for the proposed method. 2. Explain how the hyperparameters for the DP-SGD comparison were selected. As the paper is not advocating Bayesianism as an inference method, it is important that the comparison to DP-SGD is as fair as possible. If the hyperparameters are chosen suboptimally, then DP-SGD might struggle unnecessarily. --- Reply to Comment 1.1.1: Comment: We are pleased our responses address your concerns and thank you for your careful consideration. We will ensure the final version of this paper addresses your two points: 1) We will explicitly mention that the posterior mean (PM) is the point estimate one would ideally release if privacy were not an issue, i.e. a non-private baseline. 2) The DP-SGD parameters were set with hyperparameter tuning to make the baseline as strong as possible. We will explicitly mention this and what the values are in the experimental section.
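The response-noising strategy proposed in this rebuttal for the linear-regression variance bound can be sketched as follows. Adding iid $N(0, s_0^2)$ noise to the responses enforces $\mathrm{Var}(y) \ge s_0^2$ without ever inspecting the data; the function name is hypothetical and $s_0^2 = 0.01$ follows the rebuttal:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_variance_floor(y, s0sq=0.01, rng=rng):
    """Add iid N(0, s0sq) noise so that Var(y + noise) = Var(y) + s0sq >= s0sq."""
    return y + rng.normal(0.0, np.sqrt(s0sq), size=y.shape)

# Even a degenerate, zero-variance response vector gets a variance floor of about
# s0sq, while the mean is unchanged in expectation; no data-dependent scaling,
# and hence no extra privacy cost, is needed.
y = np.full(100_000, 3.0)
y_noised = add_variance_floor(y)
var_hat = y_noised.var()
mean_hat = y_noised.mean()
```

Because the noise scale is fixed in advance rather than chosen by looking at the observed variance, the transformation itself leaks nothing, which is the distinction the reviewer asked about.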
Summary: Maintaining differential privacy in a Bayesian setting can be implemented using Gibbs posterior sampling, which can be viewed as the exponential mechanism with respect to a score function defined by the sum of the log prior probability and the log likelihood of the dataset multiplied by a factor governed by the privacy parameter. To ensure differential privacy, this factor must scale with the global sensitivity of the log likelihood, which is unbounded in the general case. The authors of this paper propose $\beta$D-Bayes, a DP mechanism based on an alternative score function, which is defined by the $\beta$ divergence. This loss function is bounded whenever the underlying distribution is bounded, a much more reasonable assumption. While this method is no longer justified by the theoretical Bayesian framework, it is still consistent under certain assumptions. While this probability function is intractable in the general case, it can be approximated using MCMC techniques, which are proven to guarantee privacy under additional assumptions. They continue to evaluate the quality of this new technique numerically and empirically. Strengths: The posterior sampling technique is an important tool in providing private parameter estimations in a Bayesian setting. This technique has two main drawbacks. The first is the infinite global sensitivity of the score function in the general case, and the second is the intractability of the posterior distribution. This paper provides an important method that relieves the first issue. Weaknesses: It might result from my limited knowledge of statistical tools, but I found it hard to parse many of the claims presented in the paper. Presentation: In many cases, the paper avoids providing full definitions and conditions for the stated claims, and instead refers the reader to other papers or the appendix.
The conditions in Theorem 2 and Proposition 2, and the discussion in the first paragraph of section 4, all refer to conditions stated in other papers, and the notation used to state Proposition 1 was defined only in the Appendix, which I found very challenging to follow. Hard to parse informal claims: While the formal claims were clearly stated, some of the results stated in the introduction are not fully proven, but are based on discussions in sections 3 and 4. While these short discussions might be sufficient for experts in the field, I found it hard to follow. In particular, I did not understand the first paragraph on page 6 which discusses implementation to NNs, and the first paragraph on page 7 which discusses the computational guarantees (as far as I can tell, there are none for $\beta$D-Bayes). Minor comments: * In equation 4, the letter $D$ is used twice, once as an argument of the function, and once as the integrated term, which is confusing. * In Theorem 2, $\theta_{0}^{(\beta)}$ was not defined. Is it possible it is a shorthand for $\theta_{0}^{\ell^{(\beta)}}$? * In Figure 3, the choice of colors makes it hard to distinguish between $\beta$D-Bayes (PM) and SGD. ======= **Edit after rebuttal discussion:** As my main concerns were presentation related, I hope the authors will update their work in accordance with the additional explanations they added in the rebuttal. In particular, in my opinion, the format of the statements should be edited, so that the paper will become self-contained. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: As mentioned in the previous section, I had a hard time parsing many of the claims, especially those which were not formally stated. The authors' input will be appreciated.
Minor question: In line 52, I failed to understand why the OPS method was presented as an alternative to the sensitivity method, while as the authors explain in later parts, it is actually an implementation of the exponential mechanism with the appropriate sensitivity function? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the importance that moving away from log-score updating contributes to DP probabilistic inference. We address your questions and concerns specifically below. ### Clarifying Thm. 2 and Prop. 2, the first paragraph of Sec. 4 and Prop. 1 Space in the paper is limited and we decided to spend what we have explaining the concepts. We propose the following changes to improve this section’s clarity: We can discuss the conditions of [64] in relation to the $\beta$D-Bayes posterior before the proofs of Thm. 2 in the appendix. Condition 2 required for Prop. 2 is provided in the appendix before the proof of the proposition. We will directly refer to this in the statement of Prop. 2. A formal statement of Prop. 12 of Minami [65] and their conditions will also be added to the appendix and correctly referred to in the paper. From Prop. 1, $H^{(\beta)}_0$ is the Hessian matrix and $K^{(\beta)}_0$ the cross-product matrix of the gradient vector, we keep their formal definitions in the appendix, but add the following sentence to Prop. 1: “where $K^{(\beta)}_0$ and $H^{(\beta)}_0$ are the gradient cross-product and Hessian matrices for the $\beta$D loss and are defined in appendix Eq. …’’ We will add a subsection to Appendix A providing formal definitions of all notation. ### Clarifying the first paragraph in page 6 discussing the implementation to NNs The first paragraph on page 6 points out that NNs have been shown to outperform logistic regression for modern problems. However, the log-likelihood of a NN classifier is not convex, and therefore the methods of Minami [65] and Chaudhuri [19] cannot be applied. Instead, the state-of-the-art method for DP estimation of NNs is DPSGD which adds noise to minibatch gradient evaluations in SGD and clips the gradients at some value to bound their sensitivity. 
We agree our previous statement on ll 222 was imprecise and we will replace this statement with: “unlike logistic regression, the log-likelihood of a neural network classifier is not convex and therefore the methods of Minami et al. [65] and Chaudhuri (2011) [19] cannot be applied.” Further, our description of DPSGD on ll 224 was very brief and relies on the reader to remember DPSGD from ll 46. We therefore replace this with: “DPSGD [1] which adds noise to minibatch gradient evaluations in SGD and clips the gradients at some value to artificially bound their sensitivity.” ### Clarifying the first paragraph on page 7 which discusses the computational guarantees Sec. 4 introduces that while OPS sampling from the $\beta$D-Bayes posterior is $(\epsilon, 0)$-DP, if one uses MCMC to approximate the $\beta$D posterior the MCMC approximation needs to be accounted for in the DP analysis. The cited result from Minami [65] says that if the distribution of the MCMC chain is within total variation distance $\gamma$ of the $\beta$D-Bayes posterior, then a sample from the MCMC chain is $(\epsilon, \delta^{\prime})$-DP, with $\delta^{\prime} = (1+e^{\epsilon})\gamma$. We know that if we run MCMC for $N = \infty$ iterations we can achieve $\gamma = 0 \Rightarrow \delta^{\prime} = 0$, but in practice only finitely many iterations are possible. The first paragraph on page 7 evokes the result from Seeman [74] saying that if the chain is geometrically ergodic, then at least order $\log(n)$ ($\Omega(\log n)$) iterations are required to bring $\delta^{\prime} = (1+e^{\epsilon})\gamma$ down to at most order $1/n$ ($O(1/n)$), a reasonable value for the size of $\delta$ as we state in Line 34. The paragraph then discusses the sampler we consider. We used Stan’s implementation of the No-U-Turn sampler which has been shown to be geometrically ergodic (i.e. fast mixing) and comes with warnings when the chain exhibits evidence of a lack of geometric ergodicity.
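The privacy accounting in this paragraph can be made concrete with a small sketch. `delta_prime` is the Minami et al. formula quoted above; the TV contraction rate `rho` in `min_iterations` is a hypothetical stand-in for the geometric ergodicity constant, which is generally unknown in practice:

```python
import math

def delta_prime(epsilon, gamma):
    """delta' = (1 + e^eps) * gamma: the (eps, delta')-DP guarantee for a
    sample from an MCMC chain within total variation distance gamma of an
    exact (eps, 0)-DP posterior sample (the Minami et al. result)."""
    return (1.0 + math.exp(epsilon)) * gamma

def min_iterations(n, epsilon, rho=0.5):
    """Iterations needed for delta' <= 1/n under an assumed geometric
    TV contraction gamma_k <= rho^k; note the O(log n) growth."""
    gamma_target = 1.0 / (n * (1.0 + math.exp(epsilon)))
    return math.ceil(math.log(gamma_target) / math.log(rho))

eps = 1.0
# An exact posterior sample (gamma = 0) gives delta' = 0, i.e. pure eps-DP;
# larger n demands more iterations, but only logarithmically more.
exact = delta_prime(eps, 0.0)
iters_small, iters_large = min_iterations(100, eps), min_iterations(10_000, eps)
```

This is only an illustration of the scaling argument, not a formal guarantee for any particular sampler.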
The conclusion is that running Stan for sufficiently many iterations to not receive any warnings provides reasonable confidence of a negligible $\delta^{\prime}$. However, we agree that our statement on ll 258 can be improved. We will increase the clarity of this passage by + Defining $\delta^{\prime} = (1+e^{\epsilon})\gamma$ on l 250 and referring to it on l 260 + Adding to ll 258: "Seeman et al. [74] observed that if the MCMC algorithm is geometrically ergodic, achieving a delta smaller than order $1/n$ and preventing data leakage requires the chain to be run for at least order $N = \log(n)$ iterations" + Finishing the paragraph with: "running Stan for sufficiently many iterations to not receive any warnings provides reasonable confidence of a negligible $\delta^{\prime}$" This paragraph does not provide formal guarantees for using Stan. Following the works of Minami et al. [65] and Wang et al. [80], we assume that a sample from a chain after convergence is representative of a sample from the posterior, and the paragraph here explains that this assumption is reasonable for the sampler we have chosen. Clearly formal guarantees are desirable but we believe these are out of the scope of this first paper. We specify on ll 269 how $\beta$D-Bayes is suitable for applications of DP-MCMC, but we leave investigating this for future work. We hope that the existence of a more precise general-purpose DP posterior encourages and facilitates new advances in DP-MCMC. ### Why the OPS method is presented as an alternative to the sensitivity method The sensitivity method is a subclass of the exponential mechanism that noise-perturbs an estimate with bounded sensitivity. In contrast, OPS (also an instance of the exponential mechanism) samples from a density proportional to a sensitivity-bounded function. We make the distinction as the noise in Bayesian sampling is not artificially added, but naturally present. ### Notation in Eq. 4 Thanks, we have changed this to $\overline{D}$. --- Rebuttal Comment 1.1: Comment: I thank the authors for the clarification. As most of my concerns are presentation related, rather than content related, I still feel like the scientific community will benefit from a revised version of this paper, which better reflects the interesting ideas it discusses. I leave it to the AC to make the call regarding the choice to leave some definition details out of the main body, but in my opinion this is not a good practice despite the space limitations. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their positive feedback and are pleased our clarifications and proposed revisions were useful. We do not wish our contribution to be undermined by the presentation details kindly pointed out by the reviewer. Should the AC feel it appropriate, an alternative to the proposed additions in the rebuttal would be to move Proposition 1 (which is a formal guarantee that has been provided for other OPS methods so far but does not add to the intuition of our method) to Appendix Section A.2.4 where the interested reader can read up on it alongside the accompanying necessary definitions of the Hessian and gradient cross-product matrix. With the space saved, statement and discussion of Assumptions (1) and (2) of Theorem 4 of Miller [64] relevant to Theorem 2 (which we promised in our rebuttal to add to Section A.2.3) can be moved before Theorem 2 on Section 3 ll 196, and our Condition 2 (currently ll 657 in Appendix A.2.5) can be moved before Proposition 2 on ll 283.
Summary: The paper introduces a privacy mechanism called $\beta$D-Bayes, which combines the one-posterior sampling (OPS) technique with the $\beta$-divergence to provide differentially private (DP) parameter estimation for a wide range of inference models. The goal is to ensure that sensitive information in the training data is not leaked when releasing model parameters. The authors extend the applicability of OPS to general prediction tasks and propose $\beta$D-Bayes as an alternative to the sensitivity method in DP estimation. The sensitivity method perturbs the function that depends on the sensitive data with noise scaled according to the sensitivity of the function. However, this approach introduces statistical bias and limits the interpretability of the released statistical estimates. OPS, on the other hand, leverages the uncertainty provided by sampling from Bayesian posterior distributions to generate interpretable DP parameter estimates. OPS has been shown to consistently learn the data-generating parameter and is not restricted to specific models like logistic regression. The authors introduce $\beta$D-Bayes to make OPS applicable to a broader class of inference models. They combine OPS with a robustified general Bayesian posterior that minimizes the $\beta$-divergence between the model and the data-generating process. $\beta$D-Bayes naturally provides a pseudo-log likelihood with bounded sensitivity for popular classification and regression models without modifying the underlying model. This eliminates the need to assume bounded feature spaces or clip statistical functions. Extensive empirical evidence is provided, including performance comparisons with relevant baselines on multiple datasets and analysis of sensitivity based on sample size and privacy budget. Strengths: This paper proposes a generalizable approach that is potentially high-impact. I think it could lead to broadening differential privacy research in the Bayesian inference field.
The proposed approach is an efficient alternative to the sensitivity methods, and the empirical evaluation shows that the proposed approach outperforms the baselines. Overall the paper is well-written, the motivation is clear, and the technical contribution looks correct and strong. It also has a nice related work section and it is easy to understand the contribution. Weaknesses: The empirical study conducted in Section 5 is limited. I would suggest adding more complex models to the empirical study, e.g. neural network classification, and discussing the complexity of such applications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the proposed method be applied to stochastic MCMC methods? How could it be extended to neural networks? Does the complexity allow it to be applied to this type of model? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This method, as a one posterior sampling (OPS) method, is computationally infeasible since it aims to achieve differential privacy for a sample from the exact posterior distribution. OPS methods rely on using MCMC samples to approximate the posterior distribution. However, ensuring the convergence of the MCMC sampler is crucial to avoid compromising privacy further. This part is missing at the moment, but I understand it is not within the scope of this paper. I think that could be follow-up work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for acknowledging the potentially high impact that is brought by our paper’s broadening of differential privacy research within Bayesian inference. We address your questions and concerns specifically below. ### The breadth of our empirical study (incl. neural network classification) Thank you for these comments. We included a comparison with logistic regression as this is one of the only models that can currently be tackled by the OPS methods of Wang [80] and Minami [65] as well as Chaudhuri [19], and we wanted to show that even in these cases $\beta$D-Bayes outperformed these approaches. However, we believe one of our key contributions is providing a setting in which OPS can be extended to a wider class of models. We illustrated this by considering a regression example (where DP methods struggle outside of conjugate families because of unbounded responses) with a neural network mean function. The results of this are in Fig 3. An example of neural network classification was also already included in the original version of our paper, to further demonstrate the flexibility of $\beta$D-Bayes for DP estimation, but sadly the paper is short on space. This appears in Suppl. Fig 6. ### Applications of stochastic MCMC methods for neural networks The aim of our paper was to establish $\beta$D-Bayes as an alternative to the current Gibbs posterior OPS methods. In the paper, we ran an off-the-shelf MCMC for neural networks with one hidden layer and this appeared to work well. In principle, our paper proves that a sample from the exact posterior is DP, and from this standpoint, as long as the stochastic MCMC is run to convergence, it would be applicable to our method. A great deal of research has gone into scaling MCMC methods for logistic regression and there are tailored MCMC methods for neural networks as well. We hope that the considerable improvements in performance make the increased computational costs worthwhile.
Our experiments show that $\beta$D-Bayes OPS outperforms Chaudhuri [19] and DPSGD [1] as well as the Gibbs OPS methods. We hope that the performance of our method encourages further research that can tackle these computational challenges, including scaling MCMC to large neural networks or, alternatively, DP variational Bayes methods to approximate the $\beta$D-Bayes posterior [41, 45, 70]. These, however, require different theoretical tools that are out of the scope of this paper. Finally, we note that these procedures cannot be repeated, as repeated estimation would leak information, and only require one posterior sample (after reaching stationarity). As a result, it is reasonable to trade off performance gains for computational costs. To Sec. 6 l. 377 we will add: > A further limitation of OPS is the computational burden required to produce posterior samples, particularly in larger neural networks with many parameters. However, we argue that the improved performance justifies such a cost, which is mitigated by the fact that such inference can only be run once to avoid leaking privacy and that only one posterior sample is required. We hope that our method encourages further research into computational procedures for these models, including scaling MCMC to large neural networks or developing DP variational inference approaches [41, 45, 70] for the $\beta$D-Bayes posterior. ### Interesting follow-up work could look at ensuring the convergence of the MCMC to avoid compromising privacy further You are absolutely right. We proved that exact sampling from the $\beta$D-Bayes posterior was $(\epsilon, 0)$-DP, whereas in practice one would normally use MCMC to approximate sampling from this posterior. In Sec. 4 we refer to a Thm.
of Minami [65] which says that if the distribution of the MCMC chain is within total variation distance $\gamma$ of the $\beta$D-Bayes posterior then a sample from the MCMC chain is $(\epsilon, \delta^{\prime})$-DP, with $\delta^{\prime} = (1+e^{\epsilon})\gamma$. The result from Seeman et al. [74] (stated on l. 258) observed that if the MCMC algorithm is geometrically ergodic, achieving a delta of at most order $1/n$ and preventing data leakage requires the chain to be run for at least order $N = \log(n)$ iterations. Finally, we point out that Stan’s implementation of the No-U-Turn sampler has been shown to be geometrically ergodic (i.e. fast mixing) and comes with warnings when the chain exhibits evidence of a lack of geometric ergodicity. The conclusion is that running Stan for sufficiently many iterations to not receive any warnings provides reasonable confidence of a negligible $\delta^{\prime}$. However, as you point out, exact guarantees are crucial to avoid compromising privacy further. We agree it is somewhat beyond the scope of our first paper and an excellent idea for follow-up work. There are two avenues here: 1. Deploy $\beta$D within methods that specifically design MCMC chains to be differentially private (DP-MCMC). Prop. 2 in Sec. 4 outlines the conditions required of the model for $\beta$D-Bayes to be immediately applicable to some popular methods. Incidentally, these methods often rely on stochastic MCMC methods which you mentioned above. 2. Try to take advantage of the boundedness of the $\beta$D-Bayes loss function to provide convergence guarantees for a finite sample chain from a particular algorithm. This, for example, was done in Prop. 13 of Minami [65] for a small class of well-behaved models. We have alluded to these in Sec. 6 ll 370-377. We aimed to propose a posterior that is more precise and widely usable than current methods and hope this inspires further research in this direction.
--- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your detailed response and addressing my comments. I read the other reviews and the answers to the reviews. I already thought the community can benefit from this paper and I believe the revised version with suggested changes will be even stronger. I keep my accept score as is. --- Reply to Comment 1.1.1: Comment: We are really pleased that you feel our proposed changes would further strengthen the paper. Thank you for your feedback and consideration.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their thoughtful feedback and for appreciating the importance of our contribution. We have made all minor changes and addressed their suggestions in individual responses. Alongside our rebuttal we provide a PDF demonstrating how we will alleviate some of the reviewers’ concerns. **Reviewers 9D13, fSk8 and D3oa** found the demonstration of our results in Fig 2 and 3 confusing. In particular, the plot was cluttered with too many lines, some of which looked similar. The goal of Fig 2 and 3 is to compare the performance of the different private methods for the same level of privacy, and not the performance of their non-private analogs. To address these concerns, Fig S1 (in the attached PDF) provides an updated version of Fig 2 where we have removed the non-private methods (grey/black dotted lines). The plot now compares the four DP methods for the same $\epsilon$ on simulated and real data sets and shows that for large enough \# observations $\beta$D-Bayes OPS incurs the least statistical error. We will also remove the non-DP methods from Fig 3 to improve its readability. Following comments from **Reviewer D3oa**, we also extrapolated the figures to display the full data set sizes (to the benefit of our method). Previously, we presented the results only on a subset of the data for clarity. Now that the dashed lines are removed, the results on the full data set can be displayed without loss in readability. We run the method on repeated subsamples of the data in order to provide an idea of how the methods compare for different sample sizes. We do not consider larger subsamples for computational reasons. We could run our simulations for more than $n = 1000$, however, from the top line of Fig S1 (in the attached PDF), we believe it is already clear that $\beta$D-Bayes performs the best.
We hope that from the updated Fig S1, it is clear that for the simulations and real data, for all considered values of $\epsilon$ and large enough \# observations, $\beta$D-Bayes is the best (lowest for log RMSE top plots, highest for AUC bottom plots). Fig S2 (in the attached PDF) illustrates a separate comparison of private $\beta$D-Bayes inference with its non-private analog (i.e. the posterior mean) and these plots for all the methods will be added to the appendix. As pointed out by **Reviewers 9D13 and D3oa**, PM stands for posterior mean and this is now explicitly noted in a revised version of the paper. Fig S3 (in the attached PDF) addresses the comment of **Reviewer D3oa** who asked if $\beta$D-Bayes could "replicate[d] a finding without DP". For logistic regression, we understood a finding as estimating a coefficient as being of a particular sign. For our real and simulated data, Fig S3 looks at the CSA (correct sign accuracy), the proportion of the time the DP-parameter estimates from the different methods agree with those of a non-private baseline (i.e. l2-penalised logistic regression implemented in sklearn). The plot demonstrates that the estimates from $\beta$D-Bayes OPS would agree with the sign of the non-private method more often than the other methods. Pdf: /pdf/a10cf1660330e86c0abf311ffa03f2066e7a4a61.pdf
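The CSA metric described above reduces to a sign comparison against a non-private fit. A minimal sketch of our reading of it (the function name and toy data are ours), assuming NumPy:

```python
import numpy as np

def correct_sign_accuracy(dp_estimates, baseline):
    """Fraction of DP coefficient estimates whose sign agrees with a
    non-private baseline fit.

    dp_estimates: (n_repeats, n_coefs) array of repeated DP estimates
    baseline:     (n_coefs,) non-private coefficient vector
    """
    dp_estimates = np.asarray(dp_estimates)
    agree = np.sign(dp_estimates) == np.sign(np.asarray(baseline))
    return float(agree.mean())

dp = [[0.5, -1.2], [0.1, 0.3], [2.0, -0.4]]  # three DP repeats, two coefficients
base = [1.0, -1.0]                           # non-private baseline
csa = correct_sign_accuracy(dp, base)        # 5 of 6 signs agree
```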
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Exploiting Connections between Lipschitz Structures for Certifiably Robust Deep Equilibrium Models
Accept (poster)
Summary: Since the introduction of Deep Equilibrium Models (DEQs), many papers have been written about the certified robustness properties of DEQ models including MonDEQ and GMonDEQ. In addition, many methods have been proposed to study the certified robustness of conventional neural networks, such as AOL, SLL, and Sandwich Layers. In this paper, the authors show that all the aforementioned methods can be encapsulated under the LBEN framework. The authors conclude this paper by showing experimental results for the MNIST dataset. Strengths: This paper is well written and easy to read. I believe the originality and significance of this paper lies in the observation that the LBEN framework can encapsulate methods like MonDEQ and GMonDEQ. In addition, the authors also discuss the reparametrization to fit MonDEQ into the LBEN framework, which the authors correctly identified as non-trivial. Weaknesses: I believe this paper would benefit from a clear discussion of the application scenarios of the proposed method. For example, when does the proposed method fail to provide satisfactory certified robustness guarantees, and when is it good? In addition, I believe non-MNIST/CIFAR experiments should be run in order to demonstrate the effectiveness of the proposed method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The authors note in their paper that "one can potentially fine-tune the trained models from MonDEQ, SLL, and AOL by reparameterizing the networks as LBEN and initializing the LBEN training from these reparameterized models". Why would this be better than simply training an LBEN model? If there is any difference, under which circumstances would one be better than the other? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: There are no negative societal impacts of this work Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and thorough evaluation. We are happy to address your concerns and questions. **I believe this paper would benefit from a clear discussion of the application scenarios of the proposed method. For example, when does the proposed method fail to provide satisfactory certified robustness guarantees, and when is it good? In addition, I believe non MNIST/CIFAR experiments should be run in order to demonstrate the effectiveness of the proposed method.** This is a very interesting comment, which will add a lot of value to our draft when addressed. Scalability in training LBENs is the main open issue. Our method will provide good certified robustness results when a relatively small 1-Lipschitz structure can be used to provide useful pretrained features for the tasks at hand. Due to the scalability issues of training DEQs, applying them to larger datasets like TinyImageNet remains very challenging. We tried our method on TinyImageNet, but it failed due to this scalability issue. Then we tried our method (LBEN initialized from a small SLL) on CIFAR-100, and our method achieves the best certified robustness of any existing DEQ model on CIFAR-100. By fine-tuning the LBEN initialized from SLL, we are able to see some improvements in certified robustness. Despite being marginal, these improvements are sufficient to give us state-of-the-art certified robustness results for DEQs on CIFAR-100. |model | clean accuracy | $\epsilon = 36/255$ | $\epsilon = 72/255$ | $\epsilon = 108/255$ | $\epsilon = 1$ | | ---- | ------ | --- | --- | --- | --- | | SLL | 0.398 | 0.288 | 0.207 | 0.149 | 0.038 | | LBEN | 0.292 | 0.176 | 0.117 | 0.078 | 0.015 | | LBEN with SLL init | 0.403 | 0.291 | 0.209 | 0.153 | 0.041 | Generally, explicit feed-forward convolutional models have very good inductive biases that help them extract useful features and perform well on vision tasks.
Although DEQs are technically more expressive than feed-forward networks, they lack many of these important inductive biases. In our paper we show how one may initialize a DEQ (as LBEN) with the useful features of SLL while maintaining its Lipschitz constant. Admittedly, some of the vision benchmarks are not quite state-of-the-art with respect to the best explicit feed-forward SLL models because we are using much smaller initialized models. This is due to 1) scalability issues inherent to training DEQs and 2) a technical subtlety of embedding a convolutional SLL model into a DEQ that requires a much larger convolutional kernel. This also currently prevents us from scaling up to larger vision tasks like TinyImageNet, but may be circumvented in the future. For applications using standard fully-connected layers, the representation memory footprint of an equivalent DEQ is not as much of an issue. If one wants to achieve good certified robustness on very large vision tasks, ImageNet for example, explicit models may be more appropriate as of right now. We will make sure to address these subtle tradeoffs and applications in the final version of the draft. ## **The authors note in their paper that "one can potentially fine-tune the trained models from MonDEQ, SLL, and AOL by reparameterizing the networks as LBEN and initializing the LBEN training from these reparameterized models". Why would this be better than simply training an LBEN model? If there is any difference, under which circumstances would one be better than the other?** The performance of neural networks is highly dependent on good initialization. Additionally, explicit feed-forward 1-Lipschitz convolutional networks have important inductive biases that have been crucial for certifiably robust image-classification tasks. In contrast, our current understanding of how to incorporate the right inductive biases for LBEN in the context of certified robustness is relatively limited.
Therefore, fine-tuning LBEN from 1-Lipschitz layers with good inductive biases (such as SLL) can help LBEN achieve good certified robustness by combining the benefits of the inductive biases of feed-forward 1-Lipschitz networks and the expressive advantage of LBEN over explicit networks. --- Rebuttal Comment 1.1: Comment: Many thanks to the authors for taking the time to address my questions. I have also read carefully over the feedback of other reviewers. I would leave the question of whether this paper should be accepted for the AC to decide.
Summary: The paper addresses the robustness certification of deep equilibrium models (DEQs). It proposes a novel approach that generalizes classical Lipschitz-constrained networks by presenting them as special cases of Lipschitz-bounded equilibrium networks (LBEN). The researchers' contribution is two-fold: first, they provide conditions for a DEQ to be L-Lipschitz and extend these conditions to the reparameterization of DEQs. Second, they establish a connection between SDP-based Lipschitz layers (SLL), almost orthogonal layers (AOL), sandwich layers, and LBEN. This unique approach allows for an improved certified robustness of DEQs, further enhancing the applicability and reliability of these machine learning models. Strengths: -The paper interestingly addresses the problem of Lipschitz constant certification for different types of parameterizations. This is a noteworthy contribution to the field. -The linkage between DEQ and 1-Lipschitz neural network, considering different parameterizations, provides a novel perspective on the analysis of 1-Lipschitz neural networks. The experimental results suggest this could be an efficient approach for initializing DEQ with a Lipschitz certificate. -The paper is very well written, self-contained, and reader-friendly. The state-of-the-art seems comprehensive and complete. Weaknesses: -The experimental section of the paper is somewhat limited and doesn't fully cover all the propositions put forth in the paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The paper does an excellent job bridging different types of DEQs and 1-Lipschitz neural networks, which I find commendable. However, the experimental part is very limited and doesn't match the ambition of the preceding sections -For instance, why are there no baselines with unconstrained DEQs? -Why was a different initialization scheme used for the two experiments? SLL could have been used for both the MNIST and CIFAR datasets. 
-The justification for not displaying results for AOL and sandwich layers only focuses on the robustness certification. However, AOL provides more than just Lipschitz guarantees, including quasi orthogonality. The paper would have a broader impact if it considered all different types of 1-Lipschitz parameterizations covered by the LBEN formalization in the experiments. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and thorough evaluation. We are happy to address your concerns and questions. **The paper does an excellent job bridging different types of DEQs and 1-Lipschitz neural networks, which I find commendable. However, the experimental part is very limited and doesn't match the ambition of the preceding sections** Thank you for your comment. Please refer to our general comment on the significance of our experimental results to see more details on why we believe our results do justify our theoretical claims. **-For instance, why are there no baselines with unconstrained DEQs?** An unconstrained DEQ will have a higher clean accuracy compared to a constrained model; however, in this paper we are concerned with certified robustness. The resulting Lipschitz constant of the unconstrained model is typically much larger than $1$ and leads to certified accuracy near zero based on the margin argument. [PWK21] does an evaluation of certified robustness on MNIST and CIFAR10, but the certified accuracy obtained is very low. For CIFAR10 with $\epsilon = 0.01$ (Figure 9 in [PWK21]), the best certified robust accuracy achieved in [PWK21] is roughly 10%. No results for the standard $\epsilon=36/255$ are given. **-Why was a different initialization scheme used for the two experiments? SLL could have been used for both the MNIST and CIFAR datasets.** Sorry for the confusion. We will clarify this in our revised draft. There are a few reasons for this. In the original Lip-MonDEQ paper, MNIST is used as a benchmark, whereas the original SLL work focused on CIFAR benchmarks. Before our work, Lip-MonDEQ already achieved reasonable certified robustness on MNIST, and we only include MNIST results to showcase how our theoretical connections between Lip-MonDEQ and LBEN can further improve a Lip-MonDEQ model. The CIFAR setting is where our method really makes a difference, since DEQs previously did not achieve good certified robustness on these tasks.
SLL has previously worked well for CIFAR, and hence we use SLL as the initialization when considering CIFAR. **-The justification for not displaying results for AOL and sandwich layers only focuses on the robustness certification. However, AOL provides more than just Lipschitz guarantees, including quasi orthogonality. The paper would have a broader impact if it considered all different types of 1-Lipschitz parameterizations covered by the LBEN formalization in the experiments.** Thanks for this useful feedback. In the experiments, we have only succeeded with the SLL initializations. Investigating how to achieve good certified robustness results for LBEN initialized from AOL and Sandwich on large datasets is definitely an important future task, and we want to highlight a few challenges here. Although we do show explicitly how LBEN can be initialized from an AOL, and is indeed an equivalent DEQ network at initialization, the quasi-orthogonality property will not be preserved after further training unless additional special structure is assumed about the LBEN parameterization. In addition, AOL typically requires the use of the GroupSort activation to address the gradient vanishing issue, and the current LBEN theory does not address GroupSort activations, which are not slope-restricted on [0,1]. In contrast, SLL with ReLU activations automatically addresses the gradient vanishing issue due to the residual network structure, and can be connected to the current LBEN theory. For sandwich layers, the current convolution forms use stride. How to incorporate consistent convolution structures for LBEN remains an open question. Although our theory does provide a clean connection between LBEN and Sandwich, how to use such a connection for vision tasks requires further study on convolutional LBEN. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my query. I stand by my score and recommend accepting the paper.
--- Reply to Comment 1.1.1: Comment: Thank you for considering our rebuttal. We appreciate your queries and feedback.
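For reference, the GroupSort activation discussed in the rebuttal above is easy to state concretely: pre-activations are split into consecutive groups and each group is sorted, so the Jacobian is a permutation matrix and the activation is 1-Lipschitz and gradient-norm preserving; with `group_size=2` this recovers the MaxMin variant. A minimal NumPy sketch, a generic illustration rather than code from the AOL or LBEN papers:

```python
import numpy as np

def group_sort(x, group_size=2):
    """GroupSort activation: sort entries within consecutive groups.

    Since it only permutes coordinates pointwise, its Jacobian is a
    permutation matrix, making it 1-Lipschitz and norm preserving --
    the property AOL relies on to avoid vanishing gradients.
    """
    x = np.asarray(x, dtype=float)
    assert x.shape[-1] % group_size == 0, "last dim must divide into groups"
    shape = x.shape
    grouped = x.reshape(*shape[:-1], -1, group_size)
    return np.sort(grouped, axis=-1).reshape(shape)
```

Note that GroupSort is not slope-restricted on [0,1] in the sense required by the current LBEN theory, which is the incompatibility highlighted in the rebuttal.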
Summary: This paper demonstrates that various widely-used Lipschitz network structures, including convex potential layers (CPL), SDP-based Lipschitz layers (SLL), almost orthogonal layers (AOL), Sandwich layers, and monotone DEQs (MonDEQ), can all be reparameterized as specific cases of Lipschitz-bounded equilibrium networks (LBEN). This reparameterization does not alter the prescribed Lipschitz constant in the original network parameterization. A notable aspect of the authors' reparameterization technique is that it maintains the Lipschitz prescription utilized in the different structures. Strengths: [Strengths] The paper is well-structured and clearly written. The authors offer a high-level understanding of CPL, AOL, Sandwich layers, and MonDEQ, which can all be reparameterized as specific cases of LBEN. Weaknesses: [Weaknesses] While the theoretical results are intriguing, I believe that additional numerical results should be conducted to further validate the theorem. The numerical results are quite preliminary and only discuss the MNIST and CIFAR10. I argue that these results do not sufficiently support the paper's theorem. The results on CIFAR10 are not convincing. LBEN, when initialized from SLL, does not clearly outperform the SLL network. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I question whether a 1-Lipschitz is necessary, as a small Lipschitz constant can significantly limit the representation ability. Why is a K-Lipschitz, where K is a limited constant, not acceptable? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Overall, while the theoretical proof is elegant, the experimental results do not adequately support the theoretical conclusions.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and thorough evaluation. We respond to your comments below. **While the theoretical results are intriguing, I believe that additional numerical results should be conducted to further validate the theorem. The numerical results are quite preliminary and only discuss the MNIST and CIFAR10. I argue that these results do not sufficiently support the paper's theorem. The results on CIFAR10 are not convincing. LBEN, when initialized from SLL, does not clearly outperform the SLL network.** Thank you for your comment. We have provided additional results on CIFAR100, which are also currently state-of-the-art with respect to certified robustness of DEQs. Please refer to our general comment on the significance of our experimental results for our explanation of why we believe our numerical results do justify our theoretical claims (our key theoretical claim is that it is crucial to maintain the prescribed Lipschitz constant when reparameterizing other Lipschitz structures as LBEN). Due to the scalability issues of training DEQs, applying them to larger datasets like TinyImageNet remains very challenging. We have evaluated LBEN initialized from a small SLL network on CIFAR-100, which achieves the state of the art with respect to certified robustness among existing DEQ models. Please note there is still a significant performance gap from the overall state of the art due to scalability issues of DEQs. By fine-tuning the LBEN initialized from SLL, we are able to see some improvements in certified robustness. Despite being marginal, such improvements are enough to give us the state-of-the-art certified robustness results for DEQs on CIFAR100.
| model | clean accuracy | $\epsilon = 36/255$ | $\epsilon = 72/255$ | $\epsilon = 108/255$ | $\epsilon = 1$ |
| ---- | ------ | --- | --- | --- | --- |
| SLL | 0.398 | 0.288 | 0.207 | 0.149 | 0.038 |
| LBEN | 0.292 | 0.176 | 0.117 | 0.078 | 0.015 |
| LBEN with SLL init | 0.403 | 0.291 | 0.209 | 0.153 | 0.041 |

## **I question whether a 1-Lipschitz is necessary, as a small Lipschitz constant can significantly limit the representation ability. Why is a K-Lipschitz, where K is a limited constant, not acceptable?** This is an interesting question and a non-obvious aspect of training robust Lipschitz-constrained networks. Larger Lipschitz constants for LBEN were tried in [RWM20] on CIFAR10 ($L=2,3,5,50$), which slightly improves the clean accuracy but **decreases the empirical robustness** when compared to the 1-Lipschitz LBEN. We have tried using different $L$ to improve our certified robustness results, but we did not succeed in achieving better results with larger $L$. The best certified robustness result achieved by our approach on CIFAR10 is indeed based on choosing $L=1$. One intuitive explanation for this is that the understanding of how to incorporate inductive bias via enforcing convolution structures on 1-Lipschitz layers is relatively mature, and hence choosing $L=1$ to make the Lipschitz constant consistent with these structures leads to the best certified robustness results for now. If we use LBEN with a different $L$, we will have difficulty in coming up with the right features for certifiably robust classification tasks. In contrast, if we use LBEN with $L=1$, we are directly fine-tuning based on the useful features learned by 1-Lipschitz networks. - [RWM20] Lipschitz-bounded equilibrium networks --- Rebuttal Comment 1.1: Title: Official Review Comments by Reviewer gX3G Comment: Thanks for your responses. - While I still find the experiments to be somewhat lacking in solidity, I maintain my view that this is a commendable paper.
I am particularly satisfied with the high-level understanding of CPL, AOL, Sandwich layers, and MonDEQ. I raise my score to 6. --- Reply to Comment 1.1.1: Comment: Thank you for taking our rebuttal into consideration. We appreciate your updated evaluation.
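On the 1-Lipschitz versus $K$-Lipschitz question discussed in this thread: for a linear layer, the $\ell_2$ Lipschitz constant is the spectral norm of the weight matrix, and a prescribed constant $K$ can be obtained by rescaling. A small power-iteration sketch of this standard technique, not code from the paper:

```python
import numpy as np

def spectral_norm(W, iters=200, seed=0):
    """Estimate ||W||_2, the l2 Lipschitz constant of x -> W @ x,
    by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    sigma = 0.0
    for _ in range(iters):
        u = W @ v
        sigma = np.linalg.norm(u)  # converges to the top singular value
        u /= sigma
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(sigma)

def rescale_to_lipschitz(W, K):
    """Rescale W so the linear map x -> W @ x is (at most) K-Lipschitz."""
    return (K / spectral_norm(W)) * W
```

The rebuttal's empirical finding is that, even though such $K$-Lipschitz prescriptions are available, $L=1$ gave the best certified robustness, plausibly because inductive biases for 1-Lipschitz convolutional structures are better understood.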
Summary: The paper studies the l2-certified robustness of DEQs from the Lipschitz-bounded view. They not only prove the advantages of DEQs against other models on certifiable robustness but also show the links between other popular Lipschitz layers, like convex potential layers, SDP-based Lipschitz layers, and almost orthogonal layers, and Lipschitz-bounded equilibrium models. Based on this relation, they can use pre-trained SLL models as initialization to help their DEQ training. Strengths: Their writing is clear. They show the relationship between other explicit Lipschitz networks and LBENs via a new reparameterization technique. By using this technique, they use a pre-trained SLL model to help the DEQ's training and obtain better certifiable results. Furthermore, I think they may show a good technique to accelerate DEQ training if their reparameterization technique can extend to more kinds of model training. Weaknesses: The results for SLL are much lower than their paper's report. More than 4% lower for natural accuracy and about 10% lower for 72/255 certified accuracy compared with SLL small. You may also compare CIFAR-100 and TinyImageNet in your empirical section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can such fine-tuning technique help other DEQs training for natural tasks like using MDEQ[1] or MOptEqs[2]? [1] Multiscale deep equilibrium models [2] Optimization inspired Multi-Branch Equilibrium Models Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Nothing Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful evaluation and comments. Below we provide detailed responses to each of your comments. ## **The results for SLL are much lower than their paper's report. More than 4% lower for natural accuracy and about 10% lower for 72/255 certified accuracy compared with SLL small.** We agree that this difference in performance requires more explanation. We remark that, due to scalability issues that are inherent to DEQs, we train smaller SLL networks than what is presented for the state of the art in Araujo et al. The SLL network used in our paper consists of 500k parameters, whereas the ``SLL Small'' architecture used in Araujo et al. has 40m parameters, leading to the performance gap. Note that this difference is mainly due to the scalability issues of DEQs; however, this is still the highest certified accuracy achieved by a DEQ model. We will provide these additional architecture details in the final version. ## **You may also compare CIFAR-100 and TinyImageNet in your empirical section.** Due to the scalability issues of training DEQs, applying them to larger datasets like TinyImageNet remains very challenging. We have evaluated LBEN initialized from a small SLL network on CIFAR-100, which achieves the state of the art with respect to certified robustness among existing DEQ models. Please note there is still a significant performance gap from the overall state of the art due to scalability issues of DEQs. By fine-tuning the LBEN initialized from SLL, we are able to see some improvements in certified robustness.

| model | clean accuracy | $\epsilon = 36/255$ | $\epsilon = 72/255$ | $\epsilon = 108/255$ | $\epsilon = 1$ |
| ---- | ------ | --- | --- | --- | --- |
| SLL | 0.398 | 0.288 | 0.207 | 0.149 | 0.038 |
| LBEN | 0.292 | 0.176 | 0.117 | 0.078 | 0.015 |
| LBEN with SLL init | 0.403 | 0.291 | 0.209 | 0.153 | 0.041 |

## **Can such fine-tuning technique help other DEQs training for natural tasks like using MDEQ[1] or MOptEqs[2]?
[1] Multiscale deep equilibrium models [2] Optimization inspired Multi-Branch Equilibrium Models** Thank you for your intriguing question. In order to accommodate MDEQ [1], a modification of the LBEN theory and parameterization would be required to prove the Lipschitz property. This is because MDEQ has a special residual-block DEQ structure, whereas the DEQs assumed in MonDEQ and LBEN are simply single-layer DEQs. For MOptEqs, it is conceivable to parameterize the multiple parallel-branch DEQ structure to guarantee $L$-Lipschitzness of the model and make use of their optimization approach; however, this would require new theory since it is not a standard LBEN. This would be very interesting future work.
Rebuttal 1: Rebuttal: # General Response First of all, we would like to thank each reviewer for their constructive feedback. We are glad to see that our theoretical connections on Lipschitz structures were generally well-received, and there seem to be many interesting future directions. We would like to clarify some aspects of our experimental results and how they justify the claims we make in our paper. ## General comment about the significance of our experiments Currently the understanding of the certified robustness of DEQs is very restricted, and hence our work is more exploratory in nature. However, based on the theoretical and empirical contributions, we believe it is fair to claim that our work succeeded in (i) "advancing our understanding of certified robustness of DEQ" and (ii) "improving certified robustness of DEQ on challenging tasks such as CIFAR10." Based on our theoretical insight, we use SLL to initialize LBEN with $L=1$ (consequently the value of $\sqrt{2}L\epsilon$ is preserved) and improve the SOTA certified robust accuracy of DEQs on CIFAR10 from roughly 10% at $\epsilon=0.01$ (this is the result from [PWK21], which uses MonDEQ) to 64.6% at $\epsilon=0.01$ and 56.2% at $\epsilon=36/255$. We were unable to obtain a remotely good certified robustness result training MonDEQ from scratch on CIFAR10. It seems that MonDEQ or G-MonDEQ have difficulty in learning useful features while maintaining the Lipschitz property and good prediction margins starting from scratch. In contrast, residual networks such as SLL are better at extracting useful features from scratch. One interpretation of our empirical results is that we use some useful features learned by SLL and then improve upon them using the expressive power of the DEQ structure. We believe our empirical results are sufficient to support our main theoretical idea that maintaining the value of $L$ when reparameterizing models as LBEN is important for the certified robustness of DEQs.
Admittedly, some of the results are not quite state-of-the-art with respect to the best explicit feed-forward SLL models because we are using much smaller initialized models. This is due to 1) scalability issues inherent to training DEQ and 2) a technical subtlety of embedding a convolutional SLL model into a DEQ that requires a much larger convolutional kernel. This also currently prevents us from scaling up to larger vision tasks like TinyImageNet, but may be circumvented in the future. For applications using standard fully-connected layers, the representation memory footprint of an equivalent DEQ is not as much of an issue. - [PWK21] Estimating Lipschitz constants of monotone deep equilibrium models
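The margin argument invoked throughout this thread can be made concrete: for an $L$-Lipschitz (in $\ell_2$) classifier, a prediction is certified at radius $\epsilon$ whenever the gap between the true-class logit and the runner-up exceeds $\sqrt{2}L\epsilon$. A minimal NumPy sketch of this standard certificate, with hypothetical logits and labels; this is an illustration, not the authors' evaluation code:

```python
import numpy as np

def certified_accuracy(logits, labels, L, eps):
    """Fraction of points certifiably robust at l2 radius eps.

    For an L-Lipschitz classifier, the prediction cannot flip under a
    perturbation of l2 norm <= eps whenever the margin between the true
    logit and the runner-up exceeds sqrt(2) * L * eps.
    """
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels)
    n = logits.shape[0]
    true_logit = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf  # exclude true class
    margin = true_logit - masked.max(axis=1)
    return float(np.mean(margin > np.sqrt(2) * L * eps))
```

This also explains why an unconstrained DEQ yields near-zero certified accuracy: with a very large $L$, the threshold $\sqrt{2}L\epsilon$ exceeds almost every margin.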
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Analyzing Generalization of Neural Networks through Loss Path Kernels
Accept (poster)
Summary: Thanks to the authors for their submission. Their paper tackles generalization of neural networks from the neural tangent perspective, applicable to gradient flow. The novelty of the paper stems from the new tangent kernel $\overline{K}(w,z,z')$ proposed. It is defined as an inner product of loss gradients w.r.t. the weights, evaluated at two data points $z$ and $z'$. This leads to the Loss Path Kernel, defined as a path integral w.r.t. $t$ of $\overline{K}(w(t),z,z')$. After an introduction and recollection of NTK theory, the definitions are laid out. The paper continues by proving that the gradient flow loss is a general kernel machine, followed by a Rademacher complexity bound derivation. The later Section 5 extends these results to SGD. Finally, the results are used to design a neural architecture search (NAS) and demonstrate favorable performance compared with state-of-the-art NAS algorithms through numerical experiments. The paper follows an interesting idea of generalising tangent kernels to path integrals. It is a timely and potentially interesting contribution. Being a theoretical work, however, it lacks clarity in presentation and rigor in definitions and theorems, and this strongly undermines its validity. Amendments are recommended below. Strengths: + A well-introduced paper with well-stated contributions, with Table 1 presenting the paper's contributions to NTK theory. + The paper strives for applicability, and the results are used in the design of a neural architecture search (NAS) supported by (limited) numerical experiments. Weaknesses: - $\textbf{[Definition of the Loss Path Kernel]}$ For instance, line 108: Theorem 2 "... shows that the loss of the NN at a certain fixed time ($T$) is a general kernel machine ...". But is the limit (when $T \rightarrow \infty$ and/or $dt \rightarrow 0$) also in the RKHS? Under what conditions? The existence/convergence of the integral is not commented on either in the body or in the appendices.
I would urge the authors to detail this out very thoroughly, as the assumptions needed for the definition to work would possibly limit the applicability of the results. For instance, Lipschitz gradients may be needed, because the loss may be Lipschitz locally, but its norm may be growing with $t$, e.g., in case of overfitting, and the integral (LPK) may still diverge as $T \rightarrow \infty$. - I try to provide an alternative view: In the case of the gradient flow over the whole training data set considered by the paper, i.e., gradient descent with infinitesimal steps, as opposed to stochastic (mini-batch) gradient descent, the weight updates are locally diffeomorphisms (locally invertible smooth maps - that is what the paper implicitly assumes by infinitesimal steps). Then the kernel (LTK) $\overline{K}(z,z')$ (Definition 3) can be seen as a pull-back inner product used for instance in [Principles of Riemannian geometry in neural networks, Hauser, Michael and Ray, Asok, Advances in Neural Information Processing Systems, vol. 30, 2017], and the crucial Loss Path Kernel (LPK) $K_T(z,z')$ (Definition 4) is defined as the length of the parameterised curve $f \odot w(t)$ w.r.t. this inner product $\textbf{if it exists}$! It does not exist in general unless the inner products along the path "behave nicely". - Following up on the above, the assumptions required by Corollary 1 should be listed thoroughly. Line 203 reads ..."assuming $\ell({w},z)$ is $L_{\ell}^2$-Lipschitz ... ". Is it needed? If so, is it enough for the Definitions to be well defined (see above)? - $\textbf{[Corollary 1 (Generalization bound for NN)]}$ Lines 203-208 are very vague, using terms like "usually, often, etc.", which degrade the validity of Corollary 1 and thus the main result of the paper. I strongly advise the authors to avoid such terms, especially in connection with the most important result of the paper. Try to clarify what is assumed, what is an already existing result (please cite sources, if so) and what is a hypothesis. For instance, l. 203-205: "...
$\textbf{assuming}$ the $\ell(w, z)$ is $L_{\ell}$-Lipschitz. The second component is $\textbf{often}$ not the dominant term since GD is a stable algorithm when the training data are modified slightly [24, 8]. It $\textbf{usually}$ decreases with $n$ since GD becomes more stable when $n$ grows. ..." - $\textbf{[$1/n^2\sum_{i,j}K_T(z'_i,z'_j;S')\leq B^2$, see l. 192]}$ This condition limits the datasets $S'$ the LTK and LPK can be considered over. How does this limit the applicability/generality of the results? - $\textbf{[SGD, Section 5]}$ This section generalizes the previous results (bounds) to stochastic gradient descent. The corresponding gradient flow remains utterly deterministic, however, as opposed to previous works where SGD is related to Langevin dynamics, [Stochastic modified equations and adaptive stochastic gradient algorithms, Li, Qianxiao, Tai, Cheng, Weinan, E, International Conference on Machine Learning, 2017], where path integrals involve higher-order terms due to the quadratic variation induced by stochasticity. The paper neglects this. To improve the impact of the paper, extend the argument for why a deterministic flow (ODE) applies and state the assumptions needed. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q0: Figure 2c, Fig. A4 d), and the same for Fig. A5 d): The generalization gap is close to zero from the beginning and increases later on (overfitting). It suggests the scale is large and thus the Rademacher upper bound is really vacuous compared to the real generalization gap (blue line), despite having improved on previous bounds as reported. Could the authors rescale or redo the experiments so the Figure is more convincing? This is to support the strong claim of the paper mentioned on lines l43, l55, and l92: "... strong correlation (of the hereby derived upper bound) with generalization error ..." and "... highly correlated ...". Q1: Could the authors elaborate in detail on the assumptions needed for the Loss Path Kernel (LPK) $K_T(z,z')$ to be defined for arbitrary $T$?
Or is $T$ limited (early stopping assumed)? See section Weaknesses for details. Q2: The same as Q1 just for stochastic gradient descent versions in Section 5. Also see Weaknesses [SGD, Section 5]. Q3: Condition $\frac{1}{n^2}\sum_{i,j} K_T(z'_i, z'_j ; S') \leq B^2$ on the line $192$ and later similarly for SGD kernels (definition on line 242): This condition limits the datasets $S'$ LTK and LPK can be considered over. How does this limit the applicability/generality of results? Q4: $\textbf{Gradient flow vs. Realistic NNs}$ How does the bound deteriorate when practical scenarios, e.g., finite and not infinitesimal learning rate, noise etc., are considered? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: I find missing assumptions and not stated limitations to be the main weakness of the otherwise very interesting paper. See leading questions: Q1, Q2, Q3 and eventually Q4: $\textbf{Gradient flow vs. Realistic NNs}$ Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of our paper and constructive comments! --- **Q1. Definition of loss path kernel and the existence of the integral.** A1. Indeed, this is a great question. Please allow us to address your concerns point-by-point below [this discussion will be added to the updated paper]. - [Finite training time $T$]. Throughout this paper, we restrict our attention to a finite time $T$, where $T$ is bounded by a predetermined constant. We will mention this explicitly in the revised paper. We do not consider the asymptotic behavior of the loss function as $T \rightarrow \infty$. This is a practical setting since the training time for NNs typically has an upper limit to control computational expenses and prevent overfitting. - [Existence of gradient flow]. The existence of gradient flow ($dt \rightarrow 0$) and its integrals are implicitly assumed in previous NTK papers [1, 2, 3]. The gradient flow is well-defined under a wide variety of conditions on the function, for example Lipschitz continuity of the gradient or semi-convexity [4]. - [Continuity of loss tangent kernel (LTK)]. By the assumption that the loss is continuously differentiable in line 107, the loss gradient $\nabla_w \ell(w(t), z)$ is continuous w.r.t. $w(t)$. Since $w(t)$ is differentiable (therefore continuous) w.r.t. $t$ by the gradient flow equation (Eq. (1), line 175), $\nabla_w \ell(w(t), z)$ is continuous w.r.t. $t$. After the inner product, the LTK $\bar{\mathsf{K}}(w(t); z, z')$ is still continuous w.r.t. $t$. - [Integrability of loss tangent kernel]. By the continuity of the LTK on the compact set $[0, T]$, the LTK is bounded and Riemann integrable on $[0, T]$. Therefore, the integral in the LPK exists. In short, the existence of gradient flow and a continuously differentiable loss, together with finite training time $T$, are enough to guarantee the existence of the LPK. As a side note, the validity of the LPK as a kernel function is proved in Appendix B.1, lines 583-592.
[1] Jacot, Arthur, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 2018. [2] Du, Simon S., et al. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018. [3] Arora, Sanjeev, et al. On exact computation with an infinitely wide neural net. Advances in Neural Information Processing Systems, 2019. [4] Filippo Santambrogio. Euclidean, metric, and Wasserstein gradient flows: an overview. Bulletin of Mathematical Sciences, 2017. --- **Q2. Line 203 reads ..."assuming $\ell(w, z)$ is $L^2_\ell$-Lipschitz ... ". Is it needed?** A2. Corollary 1 does not require the Lipschitz continuity of the loss function. This assumption is mentioned not as a requirement, but to provide insight into the scale of the term present in Theorem 3. As a side note, a continuously differentiable loss implies a locally Lipschitz loss. Since $w(t): [0, T] \rightarrow \mathbb{R}^p$ is continuous on $[0, T]$, the path \{$w(t): t \in [0, T] $\} is a compact set. Then, together with the loss being continuously differentiable, this implies the loss is Lipschitz on the considered path $w(t)$. --- **Q3. Lines 203-208 are very vague, using terms "usually, often etc."** A3. Thank you for the suggestion! We will rephrase the sentence to eliminate the vague terms and make the statement more precise and rigorous. Please find the revised sentence below: ``The second term calculates the variation range of the LPK in the set $\mathcal{K}_T$. It will decrease with sample size $n$ as GD becomes more stable when $n$ grows [24, 8].'' --- **Q4. The norm constraint limits the datasets LTK and LPK can be considered over.** A4. Please refer to Q2 in the global response. --- **Q5. SGD remains utterly deterministic.** A5. The only assumption we have imposed is that the mini-batch indices $S_t$ are specified before the algorithm is run (see lines 230–232).
Under this assumption, we were able to connect the trajectory of the weights updated by SGD with a deterministic flow. It is worth noting that this assumption is commonly used in generalization theory to eliminate the randomness in the selection of mini-batches [5, 6, 7]. Our findings can be extended to scenarios involving random mini-batch selection by first conditioning on the $S_t$ and then taking an expectation over the randomness of $S_t$. We will include the above discussion in the revised paper. [5] Neu, Gergely, et al. "Information-theoretic generalization bounds for stochastic gradient descent." Conference on Learning Theory. PMLR, 2021. [6] Wang, Ziqiao, and Yongyi Mao. "On the generalization of models trained with SGD: Information-theoretic bounds and implications." International Conference on Learning Representations, 2022. [7] Wang, Hao, Rui Gao, and Flavio P. Calmon. "Generalization Bounds for Noisy Iterative Algorithms Using Properties of Additive Noise Channels." J. Mach. Learn. Res. 24 (2023): 26-1. --- **Q6. Figure 2c, Fig A4,d) and same for Fig. A5, d) Generalization gap is close to zero from the beginning and increasing later on (overfitting)…** A6. To clarify, in Fig. 2(c), Fig. A4 (d), and Fig. A5 (d), the generalization gap and the Rademacher upper bound are plotted on the same scale. Our generalization bound is tight with respect to the generalization gap; for example, the bound is $\leq 0.08$ in Fig. 2(c). In contrast, existing generalization bounds are vacuous and much larger than 1, as shown in Tables 3 and 4. In response to your concerns, we will make the following changes to make our statements more precise. For Figure 2(c), Fig. A4 (d), and Fig. A5 (d), we will say that the bound is "tight" instead of "maintaining a strong correlation". The high correlation we meant is the correlation between Gene$(w, S)$ and the test error of different models shown in Figure 1. --- **Q7. Gradient flow vs. Realistic NNs** A7.
Please refer to Q1 in the global response. --- Rebuttal Comment 1.1: Title: We are happy to address any remaining concerns Comment: **Summary** - In Q1, we have clarified the condition for the existence of the integral in the LPK. - In Q2, we have clarified some confusion about the explanation of the theorem. - In Q3, we will rephrase the sentence to eliminate the vague terms and make the statement more precise and rigorous. - In Q4, we explained the meaning of the norm constraint and the way to eliminate it. - In Q5, we clarified that deterministic SGD is a common assumption in generalization theory and that our results can be extended to the random SGD case. - In Q6, we clarified that the generalization gap and the Rademacher upper bound are plotted on the same scale, and we will revise our statements to be more precise. - In Q7, we clarified that our experiments are conducted under practical scenarios and showed that our results can be extended to gradient descent with a finite learning rate. We hope our responses have addressed all your concerns. Please let us know if you have any further questions and we will be happy to address them!
Summary: The paper proposes a new complexity measure for neural networks based on tracking the changes of the weight vector of the entire model. The work is rooted in theory by linking the proposed measure through the Neural Tangent Kernel framework to Rademacher complexity, resulting in new, meaningfully tight bounds on generalisation. Strengths: Explaining the success of overparametrised neural network models using statistical learning theory is still a very important problem we haven't solved, and this work seems like a significant development in that space. The proposed complexity measure is linked theoretically to a generalisation bound. This looks like a significant amount of work, extending and generalising existing theory in the Neural Tangent Kernel framework. Empirical results, though shown on small examples, show that the derived generalisation bound is meaningfully tight (i.e. non-vacuous). For the most part (with some exceptions mentioned below) the paper is very well written, and guides the reader well through very complicated subject matter. Weaknesses: The maths are very dense...and the 32 page supplement, while admirable, is way too much to get through in the time given for this reviewing period. I didn't get through the maths, so I couldn't verify it. It almost feels like this should be a journal rather than a conference paper. I did get lost a little bit in the maths of the main part of the paper. It should be possible to follow this paper at a high level, taking the proofs of theorems at face value, and for the most part the authors do a great job of guiding the reader....but still, there are some things (see questions below) that I found confusing. The proposed method is computationally very expensive, and so not that easy to use in practice...but approximations and computational improvements might come in the future, and the theory (assuming the proofs (which I couldn't verify) are solid) is sound and of great interest.
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: As already stated, I didn't follow all the maths, but I wanted to verify my high level understanding of the work. It seems to me, that the fundamental principle here is that while the hypothesis space of the network is immensely complex (hence very high VC dimension), that during training there is only a small subset of hypotheses "visited"...and for the purpose of generalisation guarantees, what matters is the set of hypotheses "considered" (when training), not the set of hypotheses available. And the proposed loss path kernel measure is a measure of the complexity of the hypothesis space "visited". Is this a fair high level summary? I am completely lost at what $\mu^{\otimes n}$ is - is it supposed to be obvious from the context? And it all gets so complicated by Theorem 5...that I can't really tell from the math how tight the bound is. Or, since this will always depend on the training path, the tightness of the bound can only be assessed empirically? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the kind comments and positive feedback! --------- **Q1. The proposed method is computationally very expensive, and so not that easy to use in practice** A1. Thank you for raising the question about the computational cost of our method. For small models, our method is not computationally expensive. In Experiment (I), estimating the bound with 20 $S’$ (solving 20 gradient flow ODE) costs 500s and training NN costs 0.29s. The GPU memory required by estimating the bound is 2406MB and training NN requires 1044MB. For large models, the computational cost of exactly calculating the bound might be expensive but the approximation we’ve proposed for NAS (i.e., $U_{sgd}$ in Eq. (4)) is computationally efficient. In the table below, for Experiment (III), we report the averaged computational cost (GPU hours) of our approach for one architecture and the computational cost of training one NN architecture to convergence. Note our approach calculates Gene$(w, S)$ only after training for 1 or 2 epochs, leading to significant savings in computational time. | GPU hours | CIFAR-10 | CIFAR-100 | |---------------------------------------------|----------|-----------| | RS + Gene$(w, S)_2$ (Ours) | 0.036 | 0.037 | | Training one NN architecture to convergence | 1.83 | 2.56 | We will include the above results in the revised paper. --------- **Q2. Verify high-level understanding. It seems to me, that the fundamental principle here is that while the hypothesis space of the network is immensely complex (hence very high VC dimension), that during training there is only a small subset of hypotheses "visited"...and for the purpose of generalisation guarantees, what matters is the set of hypotheses "considered" (when training), not the set of hypotheses available. And the proposed loss path kernel measure is a measure of the complexity of the hypothesis space "visited". Is this a fair high level summary?** A2. 
We appreciate your careful read of our paper and, yes, this summary nicely describes our work! We will include the high-level summary and intuition in the revision for clarity. Thank you again! --------- **Q3. Lost at what $\mu^{\otimes n}$ is. Can't really tell from the math how tight the bound is. Or, since this will always depend on the training path, the tightness of the bound can only be assessed empirically?** A3. - $\mu^{\otimes n}$ is the joint probability distribution of $n$ i.i.d. samples drawn from $\mu$. - The tightness of our bound can be partly seen from the lower bound of the Rademacher complexity in Appendix B.3. Our lower bound matches the trace term in the upper bound $U_1$, which shows the bound is relatively tight. We also conducted experiments to demonstrate the tightness of our bounds, a standard practice adopted in generalization theory literature [e.g., 1, 2, 3]. [1] Dziugaite, Gintare Karolina, and Daniel M. Roy. "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data." arXiv preprint arXiv:1703.11008 (2017). [2] Zhou, Wenda, et al. "Non-vacuous generalization bounds at the imagenet scale: a PAC-bayesian compression approach." arXiv preprint arXiv:1804.05862 (2018). [3] Jiang, Yiding, et al. "Fantastic generalization measures and where to find them." The International Conference on Learning Representations, 2020. --------- **Summary** - In Q1, we have clarified that our method is computationally efficient for small models, and for large models, the approximation we proposed for the NAS is computationally efficient. - In Q2, we acknowledged the reviewer’s high-level summary of the paper. - In Q3, we have explained the meaning of $\mu^{\otimes n}$ and the tightness of our bound. We hope our responses have addressed all your concerns. Please let us know if you have any further questions! 
--- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Thank you for your reply - glad to know my high-level understanding of the presented concepts was not completely wrong. I am quite happy to stick with my strong recommendation to accept. --- Reply to Comment 1.1.1: Title: Thanks for your recommendation! Comment: Absolutely, it was a clear summary of our work! Thank you once again for your insightful response and the constructive feedback provided in your initial review. We will make sure to include the promised changes in the revised paper.
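As a side note on the $\mu^{\otimes n}$ notation discussed in Q3 above: one draw from the product measure $\mu^{\otimes n}$ is simply a tuple of $n$ independent draws from $\mu$. A minimal sketch (the Gaussian choice of $\mu$ is purely illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mu(rng):
    """One draw z ~ mu; a standard normal is an illustrative choice."""
    return rng.standard_normal()

# A single draw from mu^{⊗n} = mu x ... x mu (n times) is a training set
# S = (z_1, ..., z_n) of n i.i.d. draws from mu.
n = 5
S = np.array([sample_mu(rng) for _ in range(n)])
assert S.shape == (n,)
```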
Summary: This paper introduces a new generalization bound based on the dynamic NTK, called the loss path kernel in the paper. The loss path kernel is a kernel based on the integration of a loss tangent kernel (NTK with loss function) so that the generalization bound can be determined by the training dataset and the training trajectory of the parameters. Theoretically, the paper provides a generalization guarantee based on the loss path kernel, and the resulting bound improves over previous work. On the other hand, empirically, the authors verify their theorem with numerical experiments, plug their loss path kernel into NAS, and get good results. Strengths: - The paper studies a very important question, the generalization gap of general neural networks. The definition of the loss path kernel is intuitive and straightforward. From my perspective, the new generalization bound based on the loss path kernel has its value. It considers the training trajectory in the bound rather than calculating the whole function hypothesis class, so that the generalization bound can be tighter. - The paper provides numerical experiments to support the theorem. The numerical simulation and implementation in NAS show that the analysis can be used in practice and the theorem is not vacuous. Weaknesses: - The writing of the empirical part is not clear enough. I cannot get the message between Theorem 3 and Figure 2 (c). Also, Figure 1 looks strange. The paper does not provide a convincing explanation for the outlier. The implementation of the NAS part is missing, e.g., the search space. The best architecture was not reported. - The NAS part can be stronger. The paper only considers random sampling of 100 architectures. However, in practice, we do not use random sampling in NAS anymore because it is not efficient. There are many dynamic sampling methods in NAS, e.g., [1]. The paper may improve its results by using these methods.
[1] Guo, Zichao, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. "Single path one-shot neural architecture search with uniform sampling." In ECCV 2020. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Why do we need Definition 2? It seems that the analysis did not use the general kernel machine. - The loss path kernel needs to calculate integration, in GD or SGD. I wonder how to calculate them in practice, e.g., NAS, by math formulation or sampling. - The paper only considers gradient flow. Can we extend the results to normal gradient descent with a constant learning rate? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: It seems that the analysis does not hold for the ReLU network as it is not continuously differentiable. ====== Change score from 6 to 7 on Aug 10. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback, thoughtful comments, and for appreciating the novelty and value of the work! --- **Q1. The empirical part of writing is not clear enough. I cannot get the message between Theorem 3 and Figure 2 (c). Also, Figure 1 looks strange. The paper does not provide a convincing explanation for the outlier. The implementation of the NAS part is missing, e.g., the search space. The best architecture was not reported.** A1. Thank you for the questions and suggestions! Please allow us to clarify them point by point. - As shown in Corollary 1, $\hat{\mathcal{R}}^{gd}\_{\mathcal{S}}(\mathcal{G}\_T)$ is an upper bound of the generalization gap $L_\mu(w_T) - L_S(w_T)$. In Figure 2 (c), we plot both $\hat{\mathcal{R}}^{gd}\_{\mathcal{S}}(\mathcal{G}\_T)$ and the generalization gap to demonstrate that $\hat{\mathcal{R}}^{gd}\_{\mathcal{S}}(\mathcal{G}\_T)$ is a tight upper bound of the generalization gap. - Cause of the outlier: $U_{sgd}$ is calculated from the loss gradients along the training trajectory. NAS-Bench-201 is a NAS benchmark that contains 15625 NN architectures. When we randomly sample 100 architectures, there is a chance that we will get some “not-so-good” architectures, which have large loss gradients during training and cause a large $U_{sgd}$. - The implementation of our NAS algorithm is detailed in lines 336-357. The search space is NAS-Bench-201 [1], as indicated in line 275 and Table 2. We follow the same setting in this line of work (TENAS, LGA) for fair comparison. We will report the best architecture searched by our algorithm in the revised paper. [1] Dong, X. and Yang, Y. Nas-bench-201: Extending the scope of reproducible neural architecture search. arXiv preprint arXiv:2001.00326, 2020. --- **Q2. The NAS part can be stronger with dynamic sampling** A2. Thank you for pointing out the reference and for your valuable suggestion. 
We will incorporate it into the updated paper and will also attempt to reproduce our results using dynamic sampling. Additionally, we would like to highlight that the performance of our approach (93.79 for CIFAR-10) is already close to the “optimal” (94.37 for CIFAR-10) in Experiment (III) Table 2, where the “optimal” means the best test accuracy achievable in the NAS-Bench-201 search space. --- **Q3. Why do we need Definition 2? It seems that the analysis did not use the general kernel machine.** A3. We include Definition 2 for ease of reference. We use it in Theorem 2 and Theorem 4 whose results are either a general kernel machine or a sum of general kernel machines. --- **Q4. How to calculate LPK in practice, e.g., NAS, by math formulation or sampling.** A4. For small models as those in Experiments (I) and (II), we can solve the gradient flow equation (Eq. (1), line 175) to get the model parameters and calculate the LTK accordingly. Then calculating the LPK just requires solving another ODE with the calculated LTK. These ODEs can be computed with torchdiffeq package. For large models as in NAS Experiment (III), solving the gradient flow ODE is computationally infeasible. As explained in line 276-278, we applied a trapezoidal rule to approximate the integration, where $\mathsf{K}\_{t, t+1}(z, z') = \int_t^{t+1} \left\langle \nabla_{w} \ell(w(s), z), \nabla_{w} \ell(w(s), z') \right\rangle ds$ is approximated by $\frac{\eta}{2}[\left\langle \nabla_{w} \ell(w_{t}, z), \nabla_{w} \ell(w_{t}, z') \right\rangle + \left\langle \nabla_{w} \ell(w_{t+1}, z), \nabla_{w} \ell(w_{t+1}, z') \right\rangle]$. --- **Q5. Can we extend the results to normal gradient descent with a constant learning rate?** A5. This is a great question! Please refer to Q1 in the global response, where we have clarified that our experiments are conducted under practical scenarios and showed our results can be extended to the gradient descent with a finite learning rate setting. --- **Q6. 
The analysis does not hold for the ReLU network as it is not continuously differentiable.** A6. Our assumption of a continuously differentiable NN is mainly for the loss path kernel to be a valid kernel. As a valid kernel, it needs to be continuous in its input (see Proposition 1 and lines 583-589). We believe that, with a finer analysis, our theory could be extended to non-smooth cases such as ReLU NNs. For example, a finite input space does not have the continuity requirement according to Proposition 1. Another approach is writing the LPK explicitly as an inner product of feature mappings, according to the definition of a kernel, to verify its validity. Finally, we would like to highlight that our theory is very general as it holds for any continuously differentiable neural network. In contrast, many prior works' analyses are tailored to ReLU networks and have other requirements, e.g. ultra-wide models and specific loss functions [2, 3]. [2] Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning, pp. 322–332. PMLR, 2019. [3] Cao, Y. and Gu, Q. Generalization bounds of stochastic gradient descent for wide and deep neural networks. Advances in Neural Information Processing Systems, 32, 2019. --- **Summary** - In Q1, we clarified some confusion about our experiments. - In Q2, we will incorporate dynamic sampling to further improve the NAS algorithm in the revised paper. - In Q3, we clarified that Definition 2 is for ease of reference. - In Q4, we explained how to calculate the LPK in practice. - In Q5, we clarified that our experiments are conducted under practical scenarios and showed that our results can be extended to gradient descent with a finite learning rate. - In Q6, we explained how to extend our theory to ReLU NNs. We hope our responses have addressed all your concerns.
Please let us know if you have any further questions! --- Rebuttal Comment 1.1: Title: Increasing my score from 6 to 7 Comment: Thank you for the rebuttal. The rebuttal well-solved my questions. I have read all reviewers' comments and responses. I'd like to increase my score from 6 to 7. Three items to make the paper stronger. (1) Put the gradient descent with a finite learning rate analysis in the main body. (2) The NAS experiments can be more thorough and well-explained. (3) The paper only considers infinite-width NN as a theoretical application. It would be good to provide a finite-width 2-layer NN trained under some well-defined statistical learning problems, e.g., Mixture of Gaussians, Parity function, as a case study. This may provide more theoretical insights and comparisons with previous work. --- Reply to Comment 1.1.1: Title: Thank you! We are glad to hear that your questions get resolved. Comment: Thank you for your prompt reply and raising the score! We’re glad to know that our rebuttal has addressed your concerns. We also appreciate the further insights you’ve provided. We will enrich the main body with a discussion about extending our theory to finite learning rate analysis. We will also include a more detailed discussion of the NAS experiments, and try to apply our theory to other theoretical applications.
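The trapezoidal approximation of the LPK described in Q4 above can be sketched as follows. This is a hedged illustration, not the authors' implementation: `loss_grad` is a toy stand-in for $\nabla_w \ell(w, z)$ (squared loss of a linear model), and `w_traj` is a hypothetical list of weight checkpoints saved once per training step.

```python
import numpy as np

def loss_grad(w, z):
    """Toy stand-in for ∇_w ℓ(w, z): squared loss of a linear model,
    ℓ(w, z) = 0.5 * (w·x - y)^2 with z = (x, y)."""
    x, y = z
    return (w @ x - y) * x

def lpk_trapezoid(w_traj, z, z_prime, eta):
    """Approximate K_T(z, z') = ∫_0^T <∇ℓ(w(s), z), ∇ℓ(w(s), z')> ds by
    the trapezoidal rule over consecutive checkpoints, as in Q4:
    K_{t,t+1} ≈ (η/2) [<g_t(z), g_t(z')> + <g_{t+1}(z), g_{t+1}(z')>]."""
    total = 0.0
    for w_t, w_t1 in zip(w_traj[:-1], w_traj[1:]):
        total += 0.5 * eta * (
            loss_grad(w_t, z) @ loss_grad(w_t, z_prime)
            + loss_grad(w_t1, z) @ loss_grad(w_t1, z_prime)
        )
    return total
```

By construction the approximation is symmetric in $(z, z')$ and nonnegative on the diagonal, as a kernel should be.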
Summary: The submission provides data-and-architecture-dependent Rademacher complexity generalization bounds for neural networks. Unlike previous work, the approach takes the evolution of the neural tangent kernel under gradient flow into account. First gradient flow under an evolving kernel is expressed as learning with a general kernel machine on a kernel named Loss Path Kernel. Second, a data-dependent complexity measure of the general kernel machine is computed that is finally used to compute the generalization bound. Strengths: The approach is novel as far as I know, although the literature on this topic is vast and it is hard to assess novelty. While computing the bound requires access to the true data distribution and training the network, a heuristic approximation of it from the training data and a few epochs of training is shown to correlate with performance in the experiments. Background, relevant work, and the proposed method are brilliantly presented. I did not notice any glaring issues in the math although checking the long proofs in appendix in detail is not possible given the review workload. Altogether I'm leaning towards acceptance. Weaknesses: The major weaknesses of this work are the limited intuition it provides, an issue with the presentation of the experiments, and possibly high computation required to estimate the bound. I will elaborate on these points below. The second and the third point are critical and I ask the authors to improve them in the revision. W1. The results abstract away the details of training into a black-box evolving NTK and, although this improves the domain of applicability of the result, the obtained result does not give any insights about the role depth, width, or other choices of architecture on generalization. 
If theoretical connections between these choices and the evolution of NTK are known in the literature, this paper can be improved by theoretically studying the role of different architectural choices on the loss path kernel and its complexity. W2. The main motivation for the bound is that it is not limited to infinite-width or single output neural networks. A brief subsection 6.1 right before the experiments focuses on infinite-width neural networks to avoid the dependency on training. The subsection and its placement in the paper confused me about the following experiments. It seems like the experiments afterwards consider a finite-width neural network. If this is the case, I do not understand why this subsection is placed right before them and in the same section. W3. The experiments allude to intractability of a basic procedure to estimate the bound and computational speedups from approximation. The revision should report the computation or provide measures of complexity or at least clarify in the paper that estimating the bound is not as expensive as training the network. Otherwise one could perform architecture search by trying one or a few train/validation splits. Minor comment (did not influence score): i. If the Lipschitz constant is local, it can be influenced by the data or the trajectory of optimization in ways that are not captured in the analysis. This comment did not influence the score as it applies to previous work (Arora et al) as well and I believe fixing them is outside the scope of this work. ii. Practical neural networks are typically trained with relatively large learning rates and especially the early evolution of NTK under large learning rate and gradient flow are known to be very different. This limits the significance of a study on gradient flow. ------------ Post-rebuttal: Raised the score to 7 since the revision will address W2 and W3. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: See weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback, thoughtful review, and for appreciating the merits of the work! --- **Q1. No insights about the role of depth, width, or other choices of architecture on generalization.** A1. Thank you for the insightful question! Please refer to Appendix D.3, where we have derived a norm-based bound by extending our analyses. Prior work has generally observed that wider neural networks tend to have smaller weight norms [see e.g., 1]. Therefore, our norm-based bound is expected to be smaller for such NNs. Nevertheless, we acknowledge that the exact influence of depth and width on our bound remains an active area of research that requires more exploration. Besides, our bound can be used to study the influence of width and depth empirically. We have conducted an additional experiment in the rebuttal to study the influence of width on our bound. With the same setting of Experiment (I), we train and compute our bound for NNs with widths 100 and 1000 at $T=10000$ (convergence). The 1000-width NN has a smaller bound of 0.024 compared with the 100-width NN whose bound is 0.032. This shows wider NN tends to have smaller values of our bounds and better generalization ability. Beyond depth and width, our bounds offer insights into how learning algorithms impact the generalization capabilities of neural networks. For instance, the bound $U_1$ presented in Theorem 3 suggests that the generalization gap could be influenced by the local Lipschitz constant along the training trajectory. This is because the loss path kernel can be upper bounded by the local Lipschitz constant. Meanwhile, $U_2$ indicates that generalization is contingent upon the variations in loss when trained using different data. Finally, we'd like to highlight that our results are general and can be applied to delve into various specific cases. As an illustration, we explored stable algorithms in Appendix D.2 and norm-constrained NNs in Appendix D.3. 
These investigations shed light on the roles of the Lipschitz constant, smoothness constant, and weight norm in neural network generalization, which are pivotal elements in generalization theory. We will include the above discussion in the revision, which we believe will help readers gain more insight into our results. Thank you for the question! [1] Neyshabur, B., Li, Z., Bhojanapalli, S., LeCun, Y., and Srebro, N. The role of over-parametrization in generalization of neural networks. In International Conference on Learning Representations, 2019. --- **Q2. Confusion about why subsection 6.1 of infinite-width NN is placed right before subsection 6.2 of NAS** A2. Subsection 6.1 on infinite-width NNs is a theoretical application of our results, and subsection 6.2 on NAS is an empirical application. We include subsection 6.1 to apply our bound to infinite-width NNs so that we can compare it with existing generalization bounds that are tailored to infinite-width NNs. In response to your concerns, we will move this subsection to Appendix D, where we have discussed other applications of our generalization bounds to study stable algorithms and derive norm-based generalization bounds. --- **Q3. Computation time of estimating the bound** A3. Thanks for your suggestion! In Experiment (I), estimating the bound with 20 $S'$ (solving 20 gradient flow ODEs) costs 500s, whereas training the NN costs 0.29s. The GPU memory required for estimating the bound is 2406MB, and training the NN requires 1044MB. For Experiment (III), we report the averaged computational cost (GPU hours) of our approach for one architecture and the computational cost of training one NN architecture to convergence in the table below. Note our approach calculates Gene$(w, S)$ only after training for 1 or 2 epochs, which substantially reduces the computational cost.
| GPU hours | CIFAR-10 | CIFAR-100 | |---------------------------------------------|----------|-----------| | RS + Gene$(w, S)_2$ (Ours) | 0.036 | 0.037 | | Training one NN architecture to convergence | 1.83 | 2.56 | We will report the above results in the revised paper. --- **Q4. Local Lipschitz constant can be influenced by the data or the trajectory of optimization in ways that are not captured in the analysis.** A4. We thank the reviewer for this insightful point. Indeed, the local Lipschitz constant can be influenced by the training data. To clarify, the original formulation of our bounds does not directly depend on the local Lipschitz constant. The loss tangent kernel in our bounds captures the influence of training data and the trajectory of parameters in optimization to a certain extent. Nonetheless, we do agree with the reviewer that a thorough exploration of the relationship between our bounds and the local Lipschitz constant is beyond the scope of this work. We will include a discussion in the revision. --- **Q5. Practical neural networks are typically trained with relatively large learning rates …** A5. Please refer to Q1 in the global response, where we have clarified that our experiments are conducted under practical scenarios and showed our results can be extended to the gradient descent with a finite learning rate setting. --- We hope our responses have addressed all your concerns. Please let us know if you have any further questions! --- Rebuttal Comment 1.1: Comment: Thank you for the response. It addresses my comments and I will raise the score to 7. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you so much for your response and raising the score! We are glad to know that we have addressed your concerns. We will make sure to include the promised changes in the revision.
Rebuttal 1: Rebuttal: ### **Global Response** We would like to thank all the reviewers for taking the time and effort to review our paper! We are delighted to learn that our paper was positively received, and the reviewers found that: - the background, relevant work, and the proposed method are brilliantly presented (Reviewer Zsen); - the new generalization bound based on the loss path kernel has its value, and the numerical simulation and implementation in NAS show that the analysis can be used in practice and the theorem is not vacuous (Reviewer kZTv); - this looks like a significant amount of work, extending and generalising existing theory in the Neural Tangent Kernel framework (Reviewer W7Wt); - and the paper follows an interesting idea of generalising tangent kernels to path integrals and is a timely and potentially interesting contribution (Reviewer kz2v). We also recognize that the reviewers are busy handling multiple papers, so their thoughtful feedback is even more appreciated. Below, we provide an answer to a common question shared by Reviewers Zsen, kZTv, and kz2v in this global response. We also address the concerns and questions raised by each reviewer and detail our plans for updating the paper. We will add the changes in the revision (both in the main text and appendix). Please don't hesitate to let us know if you have any additional feedback or questions regarding our response. We would be happy to address any remaining concerns with you during the discussion period. If our responses have addressed your concerns and questions, we would appreciate it if you could kindly let us know and consider raising your review score. Thanks for your time and review! --- **Q1. Extension to practical gradient descent with a finite learning rate.** A1. We thank the reviewers for this important question. First, we would like to highlight that all our experiments have been conducted under practical scenarios.
Specifically, in Experiments (I) and (II) (Section 7, line 306), we trained NNs with a relatively large learning rate (lr=10 for NTK parameterization), while gradient flow and gradient descent overlapped well (see Fig. 2 (a)) and the bounds were non-vacuous (Fig. 2 (b) and (c), Fig. 3). Following the reviewer's suggestion, we show below that it is possible to extend our results to gradient descent with a finite learning rate. In this case, the loss path kernel is defined as a summation instead of an integration over the training trajectory, as follows, $$\mathsf{K}\_T(z, z') = \sum_{t=0}^{T-1}\left\langle \nabla_{w} \ell(w_{t}, z), \nabla_{w} \ell(w_{t}, z') \right\rangle.$$ Under the assumption that the loss $\ell(w, z)$ is $\beta_\ell$-smooth and convex, we can get a bound similar to the gradient flow one with an additional term that involves the learning rate $\eta$ and the smoothness constant $\beta_\ell$: $$\mathcal{E}(\mathcal{S}, \eta, T) = \frac{\beta_\ell \eta^2}{2 n^2} \sum_{i = 1}^{n}\sum_{j = 1}^{n} \mathsf{K}_T(z_i, z_j).$$ When $\eta \rightarrow 0$, $\mathcal{E}(\mathcal{S}, \eta, T) \rightarrow 0$. --- **Q2. The norm constraint [$\frac{1}{n^2} \sum_{i, j} \mathsf{K}_T(z_i', z_j';\mathcal{S}') \leq B^2$, line 192] limits the datasets $\mathcal{S}'$ LTK and LPK can be considered over. How does this limit the applicability/generality of results?** A2. This norm constraint balances a tradeoff between the tightness of the bound and the expressiveness of the set $\mathcal{G}\_T$ (that is, the number of datasets $\mathcal{S}'$ to which the LTK and LPK can be applied). A small $B$ results in a tighter bound, whereas a large $B$ allows for more datasets to be covered. In an extreme case, we can choose $B^2 = \sup_{\mathcal{S}'} \frac{1}{n^2} \sum_{i, j} \mathsf{K}_T(z_i', z_j';\mathcal{S}')$ to encompass all possible $\mathcal{S}'$.
Finally, it is worth noting that similar kinds of norm constraints have appeared in previous norm-based bounds [1, 2, 3, 4]. By applying a method akin to that found in [2, Lemma A.9], one can cover the parameter space and use a union bound to eliminate this norm constraint. [1] Bartlett, P. L. and Mendelson, S. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002. [2] Bartlett, P. L., Foster, D. J., and Telgarsky, M. J. Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems, 30, 2017. [3] Neyshabur, Behnam, Ryota Tomioka, and Nathan Srebro. "Norm-based capacity control in neural networks." Conference on learning theory. PMLR, 2015. [4] Neyshabur, B., Li, Z., Bhojanapalli, S., LeCun, Y., and Srebro, N. The role of over-parameterization in generalization of neural networks. In International Conference on Learning Representations, 2019.
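As a concrete illustration of the discrete loss path kernel above, the following minimal NumPy sketch (our own toy example, not the authors' code) accumulates $\mathsf{K}_T$ over a full-batch gradient descent trajectory for a linear model with squared loss; the model, data, and all names here are hypothetical.

```python
import numpy as np

# Hypothetical toy setup: linear model f(w, x) = w.x with squared loss,
# trained by full-batch gradient descent. We accumulate the discrete
# loss path kernel K_T(z, z') = sum_t <grad_w l(w_t, z), grad_w l(w_t, z')>.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # 5 samples, 3 features
y = rng.normal(size=5)

def loss_grads(w):
    # per-sample gradients of l(w, z_i) = 0.5 * (w.x_i - y_i)^2
    residuals = X @ w - y            # shape (5,)
    return residuals[:, None] * X    # shape (5, 3)

w = np.zeros(3)
eta, T = 0.1, 50
K = np.zeros((5, 5))                 # discrete loss path kernel Gram matrix
for t in range(T):
    G = loss_grads(w)
    K += G @ G.T                     # <grad l(w_t, z_i), grad l(w_t, z_j)>
    w -= eta * G.mean(axis=0)        # full-batch gradient descent step

# K is symmetric PSD (a sum of Gram matrices); its average
# (1/n^2) * sum_ij K_ij is the quantity bounded by B^2 in the norm constraint.
avg = K.sum() / 25
```

Since each per-step Gram matrix is positive semidefinite, so is the accumulated `K`, matching the kernel interpretation used in the bound.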
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Label-efficient Segmentation via Affinity Propagation
Accept (poster)
Summary: This paper proposes a novel universal component for weakly-supervised segmentation by formulating it as an affinity propagation process. It simultaneously utilizes a global and a local pairwise affinity term to generate soft pseudo labels. An efficient algorithm is also developed to reduce the computational cost. Experiments on three label-efficient segmentation tasks demonstrate the effectiveness of the proposed method. Strengths: 1. The proposed framework uses both global and local pairwise affinity term and achieves superior performance. 2. The efficient implementation of global affinity propagation can greatly reduce computational cost. 3. The proposed approach can be conveniently plugged into existing segmentation networks. 4. The experiments are abundant, covering many label-efficient segmentation tasks. Weaknesses: 1. This approach seems parameter-sensitive. The slight variation of $\zeta_s$, $\zeta_g$ may lead to notable fluctuation of segmentation performance. Is there consistency in the parameters used across different tasks and datasets? 2. The efficiency of the whole framework is not intuitive. By introducing both global and local pairwise affinity term, is there a significant decrease in efficiency? It would be helpful if authors could present the change of training time with/without APro. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How to determine the number of iterations? 2. The runtime details in Table 7 are not clear, such as iterations, batch size, device... 3. In MaskCLIP, key smoothing and prompt denoising are proposed to refine the pseudo masks. The key smoothing also aims to realize global affinity propagation based on the similarity of key features (not used in the MaskCLIP+ setting). It would be helpful to explore their effects. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Please refer to the Weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the careful and thoughtful reviews. Please find our itemized responses below. **Q1. Whether parameters ${\zeta_s}$, ${\zeta_g}$ are consistent across different tasks and datasets.** R: The parameters ${\zeta_s}$ and ${\zeta_g}$ control the sensitivity to variations in pixel values, and they will impact the segmentation performance. While meticulous tuning of these parameters on different datasets and tasks could lead to improved results, we use the same values across all tasks and datasets: ${\zeta_s}$ = 0.15, ${\zeta_g}$ = 0.07. The details are shown in Tables A1 and A2 in the Supplementary Material. **Q2. By introducing both global and local pairwise affinity terms, is there a significant decrease in efficiency? Please present the change of training time with/without APro.** R: We report the detailed training time below. We conduct ablation studies on box-supervised instance segmentation on Pascal VOC. |$\quad$Method | Training Time | &nbsp;&nbsp;AP | |:----: | :----: | :----: | |Baseline | 3h | 25.9 | | +LP | 3.5h | 36.0 | | +GP | 4.5h | 37.0 | | APro (LP+GP) | 5.2h | 38.1 | The baseline model without our APro method needs 3h to train. When adding the local operation LP and global operation GP individually, 0.5h and 1.5h additional training time are needed, respectively. When adding both of them, it costs 2.2h additional training time with our efficient implementation. However, without our efficient implementation, the training time would be unbearable. We will make it clearer in our revision. **Q3. How to determine the number of iterations?** R: We perform ablation studies to determine the number of iterations. Our goal is to implement the formulated affinity propagation process efficiently with fewer iterations. Detailed ablation experiments on the impact of varying the number of iterations are provided in Table 6 of our main paper. **Q4. 
The runtime details in Table 7 are not clear, such as iterations, batch size, device...** R: Thanks for your careful comments. The batch size is set to 1, and the experiment is conducted on a single GeForce RTX 3090. The reported runtime is the average time of one GP process during testing over an epoch on the Pascal VOC dataset. We perform the runtime comparison under the same settings. We will make it clearer in the revision. **Q5. Explore the effects of Key Smoothing and Prompt Denoising in MaskCLIP.** R: Yes, the Key Smoothing (KS) also aims to realize the global affinity propagation. To better explore their effects, we conduct detailed comparisons between KS and our APro method based on MaskCLIP. The experimental results are shown in the Table below. | $\qquad$Method | CLIP Model | Context | COCO | | :------------: | :----------: | :-----: | :---: | | MaskCLIP | ResNet-50 | 18.46 | 10.17 | | +KS | ResNet-50 | 21.0 | 12.42 | | **+APro(Ours)** | ResNet-50 | **21.67** | **12.70** | | MaskCLIP | ResNet-50x16 | 21.57 | 13.55 | | +KS | ResNet-50x16 | 22.65 | 15.50 | | **+APro(Ours)** | ResNet-50x16 | **24.03** | **16.30** | | MaskCLIP | ViT16 | 21.68 | 12.51 | | +KS | ViT16 | 23.87 | 13.79 | | +KS+PD | ViT16 | 25.45 | 14.62 | | **+APro(Ours)** | ViT16 | **28.91** | **16.69** | | **+APro(Ours)+PD** | ViT16 | **29.42** | **16.71** | Both KS and our APro method bring performance gains. Compared with KS, our APro achieves better performance with different CLIP-based models. In particular, for the ViT16-based model, our approach outperforms KS by +5.04\% mIoU on Pascal Context and +2.90\% mIoU on COCO, respectively. Equipped with Prompt Denoising (PD), the models could achieve further improvements. We have the following further discussions: Key Smoothing relies on the calculation of key feature similarities, which predominantly stem from the high-level features of CLIP, and computes pairwise terms for each pair of patches. 
Compared with Key Smoothing of MaskCLIP, our method is built on a tree-based graph derived from lower-level images, which is capable of reflecting finer topological details. Furthermore, we design an efficient implementation that eliminates the need to compute similarities individually, significantly reducing time complexity. We will add the above discussions in our revision, and strengthen our introduction and experimental sections. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. The rebuttal has addressed most of my concerns and the discussion is instructive. I will keep my original rating. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Dear Reviewer bNsh, Thanks very much for your confirmation and your time.
Summary: This paper proposes an affinity propagation algorithm for weakly supervised segmentation. Given image segmentation data with only sparse box/point/scribbles annotations, the proposed method propagates these annotations to other pixels as pseudo masks for training. Global affinity and local affinity are proposed to capture global pairwise potential and local connectivity respectively. The authors propose an efficient implementation for the propagation of these two types of affinity which makes training practical. The final model performs better than prior works in PASCAL VOC and COCO with partial annotations. Strengths: - The proposed framework is simple and effective. Both global and local affinity propagation are intuitive, and the resultant method achieves strong performance compared to existing works without bells and whistles. This shows the effectiveness of the proposed affinity propagation mechanism. - The authors propose an efficient implementation for the affinity propagation algorithm which is crucial for making training with this method practical. The authors claim that it is five times faster than mean-field [29]. This fast algorithm is also potentially useful for future work in other directions. Weaknesses: - The affinities are defined on pixel intensity which can be limiting as it is sensitive to lighting and low-level noises. It might help to incorporate deep features from self-supervised, pre-trained networks (e.g., MAE). - Another direction for segmentation is open-vocabulary segmentation which can benefit from scaling and data engines (e.g., SAM). The proposed method does not seem to be scalable as the affinity is fixed to low-level color differences. - Missing related work and discussion: Learning Pixel-level Semantic Affinity with Image-level Supervision for Weakly Supervised Semantic Segmentation, CVPR 2018 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Will the code be released? 
I see no mention of this in the paper. It would help the community if an open-source version is available, especially for the non-trivial GPU implementation of affinity propagation. - When comparing speed with mean-field [29], are both algorithms running on GPU? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are mentioned in Section D in the supplementary material. The authors mention that the use of only image intensity to compute affinity is a limitation which I agree with. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the constructive and insightful comments. Please find our itemized responses below. **Q1. About incorporating deep features from self-supervised, pre-trained networks (e.g. MAE).** R: We have tested incorporating the deep features of the network during training, but it was unstable and hard to achieve reasonable performance in our experiments. In future work, we will consider incorporating deep features from the pre-trained MAE, DINO/DINO-V2 models, as you mentioned. These pre-trained models have exhibited strong generalization capability, and we believe they could make our method more powerful. **Q2. For open-vocabulary segmentation, the proposed method does not seem to be scalable as the affinity is fixed to low-level color differences.** R: Yes, our approach may not perform well in the task of open-vocabulary segmentation. As you suggested, we could incorporate pre-trained models such as DINO/DINOv2, MAE, CLIP or Stable Diffusion to improve the scalability of our method. In addition, our method can be extended to multimodality-based affinities, i.e., between vision and language, to achieve more accurate vision-language alignment for VL tasks. This is an interesting research direction for our future work. **Q3. Missing related work and discussion of the CVPR2018 work.** R: The work [1] (CVPR2018) also involves affinity modeling operations, but its formulation and pipeline are different from our approach. In particular, the work [1] adopts the sparse random walk for long-range operations, which relies on the affinity transition probability. Different from it, our method constructs the Minimum Spanning Tree (MST) on the whole image and performs the formulated affinity propagation process. To reduce the computation cost, we deliberately devise a Lazy Propagation scheme for fast implementation. 
Please kindly refer to our **responses to Reviewer bFj5** for more comprehensive analyses and comparisons with the other existing methods. [1] Learning Pixel-level Semantic Affinity with Image-level Supervision for Weakly Supervised Semantic Segmentation, CVPR 2018. **Q4. Will the code be released?** R: Yes, we will release our full source code to make contributions to the community, including the GPU implementation of affinity propagation and the complete code for each task, so that peers can easily reproduce our results. **Q5. When comparing speed with mean-field, are both algorithms running on GPU?** R: Yes, both methods are performed on the GPU device. The original mean-field [29] is based on the pixels of the whole image, and we compare the GPU version [4] of mean-field with a local kernel under the same settings. We will make it clearer in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the response. It addressed my concerns and I have no further questions. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Dear Reviewer 8pug, Thanks very much for your confirmation and your time!
Summary: This paper develops a weakly-supervised segmentation framework based on affinity propagation. It overcomes the drawback of simply modeling neighboring pairwise potentials and proposes both global and local affinity terms to generate pseudo labels. The authors demonstrate the effectiveness of the proposed method in box-supervised instance segmentation, point/scribble-supervised semantic segmentation, and CLIP-guided semantic segmentation tasks. Strengths: Originality: The paper proposes two kinds of pairwise affinity propagation which are novel for weakly-supervised image segmentation settings. The global affinity propagation employs a minimum spanning tree to remove the edges with large distances to obtain the tree-based sparse graph, which explicitly captures long-range dependency, while local affinity propagation leverages the conventional Gaussian kernel. The combination of the two enhances the performances based on Table 4. Quality: The paper itself is self-contained and includes sufficient experimental setup and details. Clarity: The paper is well-written but it might be hard to follow. It was hard to understand the context of the proposed method on my initial reading, and some confusion remained after repeated readings. Figures are suggested to enhance the clarity of certain concepts developed in this paper. Significance: I give high significance to this paper, as it proposes an effective and convincing method for a well-known and challenging problem. Weaknesses: Personally, my biggest concern about this paper is its clarity. Basically, the authors devised two affinity schemes to create long-range and short-distance dependencies, and both of them use the same framework as described in Equ. 2. However, it would be very hard to understand the details of the global one. The authors are suggested to make a graphical illustration to demonstrate the idea. Some confusion about the method also remains, as will be asked in the next section. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It would be appreciated if the authors could kindly reply to my questions and concerns. I will consider raising my evaluation. - The whole framework of affinity propagation is defined on the classic CRF model, as cited in line 110. However, it is noted that in [33], the energy function is the sum of unaries and the sum of pairwise potentials. The CRF model in Equ. 2 turns out to be the sum of products between unary and pairwise terms. If I understand correctly, this formulation can be interpreted as the weighted sum of unaries defined on neighbor pixels, with weights calculated from pairwise terms. I am wondering how the authors draw an equivalence between the two formulations. - What is the $\mathcal{L}_{g}$ used in Equ. 1? Is it a partial cross-entropy? How is it defined with non-annotated pixels? Is the unary used in Equ. 2 then the output distribution of only labeled pixels or all pixels? - In 3.2.1, the node in line 127 is defined as a pixel, right? Why can the set of edges have N-1 elements, as defined in line 132? - How do the authors set the degree of similarity defined on line 143? Is it unique for all nodes? - Can the authors explain why the tree-based model preserves topology? - For the local affinity propagation, I always consider Equ. 6 as a sum of neighbor pixels weighted by the intensities, similar to the conventional CRF model [33]. However, in [33], a spatial smoothness term is added. The authors should explain how their method preserves spatial smoothness. - Based on Equ. 6, I don't see any necessity for iterating it (there are no iterative parameters inside). What does it mean by saying iterating local affinity leads to better performance? - What is the transmission cost? - In 3.2.1 and 3.2.2, the authors then obtain two sets of pseudo labels, $y^{g}$ and $y^{s}$. How do they exactly train the network with these two sets of labels? How to merge them when using them together? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for acknowledging the strength of our method. We have tried our best to clarify each issue. Please find our itemized responses below. **Q1. The details of the global one and making a graphical illustration.** R: To facilitate comprehension, we provide a detailed graphical illustration **(Figure 2 in the PDF)** to describe our global affinity propagation process. Initially, an input image is represented as a 4-connected planar graph. Subsequently, the Minimum Spanning Tree (MST) is constructed based on the edge weights to obtain the tree-based graph $\mathcal{G}\_T$. $\psi_g(x_i,x_j)$ is calculated as $\exp(-d)$, where $d$ is the maximum value along the path $E_{i,j}$ from node $x_i$ to node $x_j$. This pairwise similarity $\psi_g(x_i,x_j)$ is then multiplied by the unary term $\phi(x_j)$ to obtain soft pseudo predictions $y_i^g$. Note that Figure 2 serves purely as a visual illustration to understand our method. In the implementation, it is unnecessary to compute $\psi_g$ explicitly. As detailed in Section 3.3, we alternatively design a lazy propagation scheme to efficiently update these values. **Q2. About the formulations of Equ.2 and the classic CRF model.** R: Yes, you are right. Our definition of the affinity propagation process does not strictly adhere to the classic CRF model [29]. The energy function in [29] consists of the sum of unary terms and pairwise potentials; however, the problem-solving process is rather complex. As our aim is to obtain a refined pseudo label, we adopt the general concept of unary and pairwise terms, with the intention of integrating their benefits in a more accessible and straightforward manner. **Q3. About the $\mathcal{L}_g$ used in Equ.1 and the unary used in Equ.2.** R: The definition of $\mathcal{L}_g$ depends on the form of supervision. For point or scribble forms, the sparsely labeled region lies in the object, which is suitable for partial cross-entropy loss. 
It has no impact on non-annotated pixels. For bounding box supervision, however, it is uncertain whether the pixels within the box truly belong to the object, so we adopt the box projection loss, which constrains the predictions within the labeled box. In our paper, we listed the loss function for each unary term in the section of Implementation Details (lines 210, 245-248) for different weakly-supervised tasks. Moreover, the unary term used in Equ. 2 is the network prediction of all pixels. We will make this clearer in the revision. **Q4. About node in line 127 and N-1 edges.** R: Yes, the node corresponds to a pixel. The constructed MST is an acyclic subgraph of the original 4-connected planar graph that includes all vertices. A spanning tree connects all nodes of an image, and there is a unique and simple path between any two nodes. Therefore, the MST-based graph with N vertices requires N-1 edges to maintain connectivity without cycles. **Figure 2 in the PDF** also gives a simple illustration of a constructed MST. **Q5. About the degree of similarity defined in line 143.** R: The degree of similarity $\zeta_g$ is a hyper-parameter in our approach. We conduct an ablation study on it in Table A1 of the supplementary material and select $\zeta_g$ = 0.07, which remains constant for all nodes. **Q6. Explain why the tree-based model preserves topology.** R: Our approach initially represents an image as a 4-connected planar graph with pixel similarity measured via the edge weight of adjacent nodes. The MST is constructed by edge pruning, which preferentially preserves edges of smaller weight, i.e., adjacent vertices with higher pixel similarity. Similar pixels are usually located inside or on the surface of an object, whereas larger pixel differences appear across distinct objects. In other words, there are more edges within an object and fewer edges connecting different objects. Thus, the MST can capture an image's topological structure. 
For better comprehension, we provide two visual examples in **Figure 4 of the PDF**. **Q7. How does the local affinity propagation preserve spatial smoothness?** R: In the conventional CRF model, the pairwise potentials involve relationships between each pixel and all the other pixels, necessitating the inclusion of a spatial smoothness term to maintain spatial coherence. In contrast, our local affinity propagation method applies pairwise affinity within a local region surrounding each pixel, such as a 3x3 or 5x5 kernel. This consideration of local domain pixels implicitly signifies spatial smoothness, indicating that nearby pixels likely belong to the same class. The detailed experimental comparison is provided below. Incorporating Spatial Position into our LP yields no performance improvement while bringing an additional hyperparameter. |$\qquad$Method|AP| |:-:|:-:| |LP|36.0| |LP + Spatial Position|34.8| **Q8. About the iteration of the local affinity.** R: Thanks for pointing out this issue. Upon obtaining the refined pseudo label $y_i^s$, we treat it as a new unary term $\phi(x)$ to iterate it. Table 6 in the main paper demonstrates the effectiveness of the iterating process. **Q9. What is the transmission cost?** R: In Section 3.3, we define the maximum $w$ of the path through any two nodes in the tree-based graph as the transmission cost $C$. For example, in the green dashed box of **Figure 2 in the PDF**, the transmission cost of $x_0$ and $x_3$ is 3*${\zeta_g}^2$. Section A.1 in our Supplementary Material also provides a detailed proof of the transmission cost. **Q10. How to train the network with labels $y^g$ and $y^s$?** R: We assign each $y_i$ from GP and LP to the network prediction $p_i$, and employ the distance measurement function as the objective for unlabeled regions $\Omega _u$. Simple L1 distance is empirically adopted in our implementation. We described it in Section 3.1 (lines 122-124) of the main paper. We will make it clearer in the revision.
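The MST construction and the max-edge pairwise term discussed in this rebuttal can be sketched as follows. This is an illustrative toy example (not the authors' efficient lazy-propagation implementation): it builds a 4-connected graph over a tiny 3x3 "image", extracts the MST with SciPy, and evaluates $\psi_g(x_i, x_j) = \exp(-d)$, where $d$ is the maximum edge weight on the unique tree path between two pixels.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy 3x3 "image" with a bright region in the upper-right / lower-right.
img = np.array([[0.0, 0.1, 0.9],
                [0.1, 0.2, 1.0],
                [0.1, 0.8, 0.9]])
H, W = img.shape
n = H * W

# Build the 4-connected planar graph: edge weight = intensity difference.
# A tiny epsilon keeps zero-difference edges visible to the sparse routine.
rows, cols, weights = [], [], []
for r in range(H):
    for c in range(W):
        i = r * W + c
        for dr, dc in ((0, 1), (1, 0)):        # right and down neighbors
            rr, cc = r + dr, c + dc
            if rr < H and cc < W:
                rows.append(i); cols.append(rr * W + cc)
                weights.append(abs(img[r, c] - img[rr, cc]) + 1e-6)

mst = minimum_spanning_tree(csr_matrix((weights, (rows, cols)), shape=(n, n)))
mst = mst + mst.T                               # symmetrize: n-1 undirected edges
adj = mst.toarray()

def max_edge_on_path(src):
    # DFS from src: d[j] = max edge weight on the unique tree path src -> j
    d = np.full(n, -1.0); d[src] = 0.0
    stack = [src]
    while stack:
        u = stack.pop()
        for v in range(n):
            if adj[u, v] > 0 and d[v] < 0:
                d[v] = max(d[u], adj[u, v])
                stack.append(v)
    return d

# Global affinity of pixel 0 (top-left) to every other pixel.
psi = np.exp(-max_edge_on_path(0))
```

By the minimax-path property of MSTs, the max edge on a tree path is independent of which MST is chosen, so `psi` is well defined; pixel 0 ends up far more affine to its dark neighbors than to the bright pixels across the intensity jump.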
Summary: This paper proposes an affinity propagation method from both local and global perspectives to improve the pseudo labels generated by the model for the parts without GT masks. They also propose an efficient implementation to address the heavy computation of graph modeling. The authors conduct experiments on several benchmarks to showcase their improvements on point/scribble/bbox-level weakly-supervised semantic segmentation, acting as a plug-in module to enhance segmentation performance. Competitive performance is obtained. Strengths: 1. The writing of this paper is easy to follow. 2. Extensive experiments are studied and analyzed to present the proposed method. 3. The final performances are competitive. Weaknesses: 1. To my knowledge, affinity propagation has been widely studied in the weakly-supervised learning community, such as [1]. The overall novelty of this paper sounds limited because their method overlaps considerably with previous methods, though the authors adopt graph modeling to analyze and model the propagation procedure. It is important to demonstrate the main differences between the proposed method and others. [1] Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers, CVPR2022 2. From Eq. 3 and Eq. 5, it is hard to see how they define the global and local terms, respectively, since the main differences lie in that a max operation is performed in Eq. 3 and the point set in Eq. 5 is local but not detailed clearly. 3. The motivation for adapting a Gaussian to model the local part in this paper should be presented more convincingly, for example, by giving some experiments directly to see that the Gaussian methods indeed capture the local receptive features. 4. 
Beyond point/scribble/bbox-level weakly-supervised semantic segmentation, how about the results on semi-/image-level supervised semantic segmentation tasks, to validate that the proposed method indeed helps to improve the soft pseudo labels for the unlabeled pixels? 5. Some important references, listed below, are missing; these methods should also be discussed in the paper. [2] Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation, NeurIPS 2021 [3] Expansion and Shrinkage of Localization for Weakly-Supervised Semantic Segmentation, NeurIPS 2022 [4] Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation, CVPR2023 [5] Semi-Supervised Semantic Segmentation With Error Localization Network, CVPR2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Despite the limited novelty in this phase, this paper can be stronger when the authors well address my questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the thoughtful comments. Please find our itemized responses below. **Q1. Main differences with the existing approaches.** R: Different from the existing approaches, our method considers object topology and captures fine-grained global affinity through an efficient implementation. Unfortunately, the existing methods fall short in achieving such an objective. Besides, our method is a general plug-in module for various label-efficient segmentation tasks, which does not require any modification of the network itself. Extensive experiments demonstrate the effectiveness of our approach in generating high-quality pseudo mask predictions across different tasks and datasets. Please kindly refer to **our responses to reviewer bFj5** for more comprehensive analyses of the existing methods, including AFA [1]. [1] Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers, CVPR2022. **Q2. About the differences between the global and local in Eq.3 and Eq.5.** R: The formulations of the global (Eq.3) and local (Eq.5) terms have a similar form but are defined in different scopes. In detail, their receptive fields for affinity propagation are totally different. The global pairwise term covers all the nodes $ \mathcal{V}$ through the tree-based graph $\mathcal{G}_T$, which is implemented via the Minimum Spanning Tree. In contrast, the local pairwise term is limited to a local area, such as 3x3 or 5x5, which is implemented via a Gaussian kernel. To better illustrate the detailed process of the global and local operations, we provide **Figure 1 in the PDF**. For visual comparisons of pairwise affinity maps, Figure 3 in the main paper also shows the global and local affinity maps, denoted as "GP" and "LP", respectively. 
In calculating the global pairwise term, we present the distance-insensitive max affinity function to ensure that the similarity does not diminish abruptly with the increase of distance along the path of the spanning tree (see **Figure 2 in the PDF** for the details). **Q3. The motivation for adapting Gaussian to model the local part should present more reasonably. Giving some experiments directly to see.** R: Spatially adjacent pixels are more likely to share the same label, which inspired us to define the local Gaussian kernel within a fixed window, such as 3x3 or 5x5. Such a neighboring receptive field allows the focus to be placed on local texture and shape features rather than global ones. Intuitively, we present the visualizations in **Figure 3 in the PDF** to demonstrate the effectiveness of this local pairwise term. One can see that the predictions become smoother after local affinity propagation (LP), indicating an enhancement in local consistency by capturing the local receptive characteristics. **Q4. How about the performance on semi-/image-level supervised semantic segmentation?** R: Thanks for your suggestions. We have conducted the image-level semantic segmentation task on Pascal VOC2012 dataset based on the framework of AFA [1]. The comparison results are shown below. Our method can further obtain +1.5\% mIoU gain over AFA[1]. |&nbsp;&nbsp;&nbsp;Method | dataset | $\quad$&nbsp;&nbsp;&nbsp; mIoU | |:-:| :-: |:-:| | AFA[1] | VOC2012 | 62.6 | | +APro(Ours) | VOC2012 | $\qquad$64.1($\uparrow$1.5) | We will perform more experiments on recent image-level methods, such as ESOL(NIPS2022) and ToCo(CVPR2023), as well as semi-supervised approaches [4][5], and add the results into the revised manuscript to further show the effectiveness of our method. **Q5. Some important references are missing below while these methods may discuss in the paper too.** R: Thanks for your suggestions. 
Methods [2] and [3] are image-level supervised semantic segmentation methods, while [4] and [5] are related to semi-supervised semantic segmentation. Specifically, RIB [2] adopts the information bottleneck principle to interpret the partial localization issue in the trained classifier. ESOL [3] proposed a new training pipeline in a "Divide-and-Conquer" manner to address the partial localization issue of the CAM method by introducing a deformable transformation operation. It is worth noting that we have already cited [3] in our manuscript. On the other hand, ELN [5] aims to deal with errors in pseudo labels, and UniMatch [4] presents the weak-to-strong consistency regularization framework from FixMatch. We will cite and discuss these works in our revised manuscript. &emsp; [2] Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation, NeurIPS 2021. [3] Expansion and Shrinkage of Localization for Weakly-Supervised Semantic Segmentation, NeurIPS 2022. [4] Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation, CVPR2023. [5] Semi-Supervised Semantic Segmentation With Error Localization Network, CVPR2022. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: I would like to thank the authors for their detailed response and further explanation. From my perspective, most of my concerns are well addressed. I would suggest that the authors provide more convincing results on other segmentation-related tasks to further enhance their plug-in role. Overall, I will raise my initial score to 'borderline accept' and suggest acceptance. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Dear Reviewer 4J6P, Thanks very much for your positive feedback and further suggestions! We'll include more convincing results in our revision as you suggested.
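The local affinity propagation discussed across these rebuttals can be sketched as an iterated Gaussian-weighted neighborhood average. The sketch below is our own simplified interpretation (not the authors' implementation); the 3x3 kernel and $\zeta_s = 0.15$ follow the values reported in the rebuttals, while the toy image, seed labels, and function name are hypothetical.

```python
import numpy as np

# Hypothetical sketch of local affinity propagation: iteratively refine a
# soft label map by a Gaussian-weighted average over each pixel's 3x3
# neighborhood, with weights psi_s = exp(-(I_i - I_j)^2 / (2 * zeta_s^2)).
def local_propagate(labels, img, zeta_s=0.15, iters=3):
    H, W = img.shape
    out = labels.astype(float).copy()
    for _ in range(iters):
        new = np.zeros_like(out)
        norm = np.zeros_like(out)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                # Shifted neighbor values; NaN marks out-of-image positions.
                shifted_img = np.full((H, W), np.nan)
                shifted_lab = np.zeros((H, W))
                rs = slice(max(dr, 0), H + min(dr, 0))
                rd = slice(max(-dr, 0), H + min(-dr, 0))
                cs = slice(max(dc, 0), W + min(dc, 0))
                cd = slice(max(-dc, 0), W + min(-dc, 0))
                shifted_img[rd, cd] = img[rs, cs]
                shifted_lab[rd, cd] = out[rs, cs]
                w = np.exp(-((img - shifted_img) ** 2) / (2 * zeta_s ** 2))
                w = np.nan_to_num(w)            # zero weight outside the image
                new += w * shifted_lab
                norm += w
        out = new / norm                        # result becomes the new unary
    return out

# Toy image: dark left region, bright right column; one labeled seed pixel.
img = np.array([[0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0]])
seed = np.zeros((3, 3)); seed[1, 0] = 1.0
refined = local_propagate(seed, img)
```

Because the intensity-based weights across the dark/bright boundary are near zero, the seed label spreads within the dark region but barely leaks into the bright column, which is the smoothness-within-objects behavior the rebuttals describe.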
Rebuttal 1: Rebuttal: **General Response** We express our gratitude to all reviewers for their insightful comments, which significantly strengthen our paper. We will revise our manuscript accordingly. As three of the five reviewers raised concerns about the differences between our work and some existing works [1-3], we would first like to clarify that our formulation and pipeline differ from theirs. The works in [1,3] are based on random walks and local operations, and [2] utilizes sparsely weighted graphs with a GNN-based affinity attention module. In contrast, our method models semantic affinity through the formulated affinity propagation processes, which take advantage of both a global affinity model using the Minimum Spanning Tree and a complementary local one with a Gaussian kernel. Besides, our work aims to devise a general and efficient plug-in module for various label-efficient segmentation tasks without the need to modify the network itself. Compared with existing methods, our approach is able to capture fine-grained affinity to generate accurate mask labels. For a detailed discussion and performance comparisons, please kindly refer to our **Response to Reviewer bFj5**. To reproduce our results, we will release the full source code of our affinity propagation with an efficient implementation for each label-efficient task. In the following, we address the specific concerns point by point. The corresponding figures and captions are included in **the submitted PDF file**. Please feel free to check them. &emsp; [1] Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers, CVPR2022. [2] Affinity Attention Graph Neural Network for Weakly Supervised Semantic Segmentation, TPAMI2021. [3] Learning Pixel-level Semantic Affinity with Image-level Supervision for Weakly Supervised Semantic Segmentation, CVPR2018. Pdf: /pdf/d7ba95e953b81a7e40d234152a3cb8ce8f69de3f.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper utilizes local and global pairwise affinity terms to generate accurate soft pseudo labels and incorporates an efficient algorithm to reduce computational costs. Experimental results demonstrate the approach's superior performance in various segmentation tasks. Strengths: Experimental results demonstrate the approach's superior performance in various segmentation tasks. The paper is easy to understand. Weaknesses: Affinity methods are commonly used in weakly supervised segmentation tasks, such as 'Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers', 'Affinity Attention Graph Neural Network for Weakly Supervised Semantic Segmentation', 'Learning Pixel-level Semantic Affinity with Image-level Supervision for Weakly Supervised Semantic Segmentation', and 'Weakly Supervised Learning of Instance Segmentation with Inter-pixel Relations'. The novelty of their method might be limited compared to existing affinity-based approaches. To stand out, the authors should clearly highlight the differences between their method and the mentioned ones, whether in the formulation of the affinity modeling task, the incorporation of local and global pairwise affinity terms, the generation of accurate soft pseudo labels, or the development of an efficient algorithm to reduce computational costs. Providing a thorough comparison with these existing methods would help readers understand the unique contributions of the proposed approach and its advantages over previous approaches. By highlighting these differences, the paper can demonstrate why their method is valuable and relevant in the context of weakly supervised segmentation tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: To make their contribution more apparent, the authors should provide a comprehensive comparison of the proposed method's performance with previous approaches on the generated pseudo ground truth.
This comparison would highlight the advantages and improvements of their approach over existing methods in terms of accuracy, efficiency, and other relevant metrics. By including a thorough analysis of the pseudo ground truth performance, readers can better understand the strengths of the proposed approach and how it outperforms or complements existing techniques. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the constructive comments. Please find our itemized responses below. **Q1. The comprehensive analysis and comparisons with the existing affinity-based methods.** R: Previous works [1-4] involve affinity modeling, and our proposed approach differs largely from them in formulation and pipeline. Specifically, [1] adopts the random walk and local pixel-adaptive refinement (PAR) to propagate the affinity. [2] introduces the Affinity CNN to convert images into graphs, using a GNN-based affinity attention module for affinity modeling. [3,4] leverage the random walk to capture long-range affinity with local operations. However, they struggle with fine-grained global semantic affinity, which hinders the generation of accurate pseudo-mask labels. As noted in [3], detailed global affinity modeling requires substantial computational costs. In contrast, we present a unified affinity propagation formulation for both global and local affinity modeling. The global term utilizes the acyclic Minimum Spanning Tree to perform fine-grained affinity propagation, which considers all nodes for comprehensive context understanding with object topology. On the other hand, the local term employs kernel-based propagation in a complementary role, focusing on the nearby area to attain spatial smoothness. To perform this procedure efficiently, we deliberately design a Lazy Propagation scheme for fast implementation, which avoids excessive computation. Furthermore, we devise a general plug-in module for various label-efficient segmentation tasks, which does not require any modification of the network itself. In comparison, the previous works [1][2][3][4] are mainly designed for a single weakly-supervised segmentation task with some customized designs. To demonstrate the effectiveness of our method, we further conduct detailed performance comparisons with the recent works [1][2].
Firstly, we implement our method based on the AFA [1] framework for image-level supervised semantic segmentation on Pascal VOC 2012. The experimental settings are the same as [1] for a fair comparison. The results are reported below. | Method | Dataset | mIoU | |:-:|:-:|:-:| | AFA [1] | VOC2012 | 62.6 | | +APro (Ours) | VOC2012 | 64.1 ($\uparrow$1.5) | We notice that the local pixel-adaptive refinement (PAR) in AFA [1] is similar to our proposed local affinity term (LP): both are kernel-based methods operating on the input image. However, their formulations and implementations are different. In particular, we compare our method with PAR [1] in the weakly box-supervised instance segmentation setting on VOC2012 and COCO. | Method | Dataset | AP | AP$_{50}$ | AP$_{75}$ | |:--:|:--:|:--:|:--:|:--:| | Baseline + PAR [1] | VOC2012 | 34.6 | 63.6 | 33.9 | | Baseline + LP (Ours) | VOC2012 | 36.0 ($\uparrow$1.4) | 64.3 ($\uparrow$0.7) | 35.6 ($\uparrow$1.7) | | Baseline + APro (Ours) | VOC2012 | 38.1 ($\uparrow$3.5) | 66.1 ($\uparrow$2.5) | 39.1 ($\uparrow$5.2) | | Baseline + PAR [1] | COCO | 30.5 | 53.1 | 30.6 | | Baseline + LP (Ours) | COCO | 31.6 ($\uparrow$1.1) | 53.2 ($\uparrow$0.1) | 32.2 ($\uparrow$1.6) | | Baseline + APro (Ours) | COCO | 33.0 ($\uparrow$2.5) | 55.2 ($\uparrow$2.1) | 33.6 ($\uparrow$3.0) | Compared with PAR [1], our approach obtains consistent performance gains. Secondly, we compare our method with A$^2$GNN [2] under its point/scribble-supervised semantic segmentation and box-supervised instance segmentation settings.
| Methods | Backbone | Supervision | Multi-stage | CRF | mIoU | |:----:|:----:|:----:|:----:|:----:|:----:| | A$^2$GNN [2] | DeeplabV2 | Point | √ | √ | 66.8 | | APro (Ours) | Tree-FCN | Point | x | x | 67.7 | | A$^2$GNN [2] | Tree-FCN | Scribble | √ | √ | 76.2 | | APro (Ours) | Tree-FCN | Scribble | x | x | 76.6 | Though A$^2$GNN achieves competitive results, it needs multi-stage training and CRF refinement as post-processing to yield accurate mask predictions. In contrast, our method is an end-to-end training framework without CRF post-processing. Regarding the weakly box-supervised instance segmentation task, our APro outperforms A$^2$GNN by a large margin, with +15.3\% AP$_{75}$ on VOC2012 and +13.4\% AP on the large-scale COCO dataset. | Methods | Dataset | Backbone | AP | AP$_{50}$ | AP$_{75}$ | |:----:|:----:|:----:|:----:|:----:|:----:| | A$^2$GNN [2] | VOC2012 | r101 | - | 59.1 | 27.4 | | APro (Ours) | VOC2012 | r50 | 38.1 | 66.1 | 39.1 | | APro (Ours) | VOC2012 | r101 | **40.6** | **68.5** | **42.7** | | A$^2$GNN [2] | COCO | r101 | 20.9 | 43.9 | 17.8 | | APro (Ours) | COCO | r50 | 33.0 | 55.2 | 33.6 | | APro (Ours) | COCO | r101 | **34.3** | **57.0** | **35.3** | The above results indicate that our method is able to model fine-grained affinity and obtain accurate mask predictions across different label-efficient segmentation tasks. We will incorporate the above discussions in our revised manuscript. &emsp; [1] Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers, CVPR2022. [2] Affinity Attention Graph Neural Network for Weakly Supervised Semantic Segmentation, TPAMI2021. [3] Learning Pixel-level Semantic Affinity with Image-level Supervision for Weakly Supervised Semantic Segmentation, CVPR2018. [4] Weakly Supervised Learning of Instance Segmentation with Inter-pixel Relations, CVPR2019.
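The global term's tree-based propagation described in this rebuttal can be illustrated with a small sketch: build a Minimum Spanning Tree over a feature graph, then score pairwise affinity by the maximum edge cost along the tree path, which is one plausible reading of the "distance-insensitive max affinity" above. All names, the scalar features, and the exponential form are illustrative assumptions, not the paper's code:

```python
import math

def kruskal_mst(n, edges):
    """Kruskal's algorithm. edges: list of (cost, u, v). Returns an
    adjacency list {node: [(neighbor, cost), ...]} of the MST."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    adj = {i: [] for i in range(n)}
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            adj[u].append((v, cost))
            adj[v].append((u, cost))
    return adj

def tree_affinity(adj, src, sigma=1.0):
    """Affinity from src to every node: exp(-max edge cost on the tree
    path), so similarity does not decay with the number of hops, only
    with the single largest dissimilarity along the path."""
    aff = {src: 1.0}
    stack = [(src, 0.0)]
    while stack:
        u, worst = stack.pop()
        for v, cost in adj[u]:
            if v not in aff:
                w = max(worst, cost)
                aff[v] = math.exp(-w / sigma)
                stack.append((v, w))
    return aff
```

On scalar features [0.0, 0.1, 0.2, 5.0], nodes 1 and 2 get the same affinity to node 0 despite being one and two hops away, while the outlier node 3 is strongly suppressed, matching the intuition that similarity should not "diminish abruptly" along long tree paths.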
null
null
null
null
null
null
Towards Automated Circuit Discovery for Mechanistic Interpretability
Accept (spotlight)
Summary: This paper first presents an overview and a useful distillation of existing mechanistic interpretability work on discovering interpretable circuits in transformer models. The authors note that most existing work proceeds in 3 steps: 1. Observe a behavior (or task) that a neural network displays, then create a dataset to measure this behavior 2. Define the scope of interpretation: do we want to look at which attention heads, MLP layers, or individual neurons are important? 3. Perform a search with patching experiments to remove as many unnecessary components as possible They then propose an algorithm, ACDC, to automate step 3 of this process, which has typically required extensive manual effort by researchers. They evaluate this extensively by applying it to existing circuits found by researchers, finding that it can discover existing circuits with good accuracy and outperform baselines. Strengths: - Very important/impactful problem - Good survey of existing work in a very new topic, bringing important clarity/systemization to the workflow of these otherwise individual findings. - Clear writing - Extensive and comprehensive evaluation - Good performance with the automated method - The proposed systemization and automated method to discover circuits will likely greatly improve the speed at which new discoveries can be made in this field, making it easier for new researchers to approach. Weaknesses: - The choice of threshold parameter tau seems inconsistent/varies by task. Unclear how I would choose tau when applying this method on a new task. - Does not address how to come up with the task and dataset for step 1, which may be the hardest part of the workflow Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - How did you choose tau for different experiments? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, very good discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback on our work! We appreciate the concise distillation of our identified three steps of the mechanistic interpretability workflow and how you highlighted the diverse strengths of our work. This response is intended to address both of the weaknesses you brought up regarding our paper. > Does not address how to come up with the task and dataset for step 1 which may be the hardest part of the workflow Overall, we don’t think that selecting tasks and datasets (Step 1 of our workflow) is likely to be as difficult or time-intensive as patching and subgraph-finding (Step 3, which we automate). For example, the IOI and Greater-Than papers used datasets consisting of at most 100 prompts that they could concisely describe, but they report much more extensive patching experiments. It is possible that in the future, for more challenging tasks, it will be harder to design datasets, but currently we think that our contribution automates most of the identified mechanistic interpretability workflow. > Unclear how I would choose tau when applying this method on a new task Thank you for the feedback; this is a valid concern. However, we have not found this to be an issue yet, particularly compared to the alternative circuit recovery methods. ACDC is an iterative rather than an end-to-end algorithm. This means that we can observe the subset of the subgraph that ACDC has recovered when only one node has been iterated over (which typically takes <1% of the total number of iterations). Therefore, practitioners can use the number of recovered input nodes to the output node as an approximation of the number of nodes that will be recovered in the entire ACDC process. A good example of this working is in Figure 13 in the gendered pronouns use case. We will highlight how practitioners have dealt with the $\tau$ parameter choice in an additional paragraph in the gendered pronouns appendix to describe this workflow.
In the main text, we performed sweeps with logarithmic spacing between choices of $\tau$, which we will also detail in the results released with the open-source implementation. --- Rebuttal Comment 1.1: Comment: Thanks for the response! This mostly addresses my concerns, and after reading the other reviews I found no significant concerns and would like to stand by my original score.
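The iterative threshold-based pruning that ACDC performs, as described in the rebuttal above, can be sketched as a greedy loop: tentatively ablate each candidate edge and keep the ablation whenever the increase in the divergence metric stays below the threshold $\tau$. This is a toy sketch, not the ACDC codebase; `metric_increase` stands in for the KL divergence of the subgraph's output from the full model's:

```python
def acdc_prune(edges, metric_increase, tau):
    """Greedy ACDC-style pruning: walk the candidate edges, tentatively
    remove each one, and keep the removal whenever the resulting increase
    in the divergence metric stays below the threshold tau."""
    kept = list(edges)
    for e in list(edges):
        trial = [x for x in kept if x != e]
        if metric_increase(trial) - metric_increase(kept) < tau:
            kept = trial  # edge is unimportant at this threshold
    return kept
```

With a toy metric where each edge contributes a fixed amount when removed, only the edge whose removal costs more than $\tau$ survives, which mirrors how $\tau$ trades off circuit sparsity against faithfulness.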
Summary: This paper introduces a method for pruning nodes in a computation graph that is meant to be used in the context of mechanistic interpretability, i.e., to find sub-graphs that explain/reproduce certain behavior of the overall graph while being much smaller. Strengths: - The problem is well-motivated, and the method's description is easy to understand. - The proposed method is compared against baselines in the form of existing pruning methods adapted to the context of mechanistic interpretability. Weaknesses: - While the authors mention that there is early evidence that their method can produce new insights, I believe these should be highlighted more prominently. The practical relevance of this approach can be better shown by applying this method to a new setting, producing mechanistic interpretability hypotheses for some network(s), and then verifying these post hoc. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - L63: This sentence can be ambiguous for readers unfamiliar with this topic: do the authors mean that one creates a dataset to train a new model that one can analyze? - L93: Add a reference to explain what is meant by "tracr". - L125ff: It can be a bit confusing that the authors here mention that they automate all but the last step in this work, while before and after, they say they only want to automate the third step. - L133f: What happens if the network implements a mechanism using redundant features and an or-operation? Then the detected circuit will only include one of the two valid sub-circuits (as removing only one does not impact the model much, but removing both drastically impacts it). Depending on what one is interested in investigating, this might not be an issue, but it should be discussed.
- L133f: Is this guaranteed to find the sparsest graph, or can premature pruning of connections at the end prevent the pruning of other connections that, all things considered, contain many more connections/components? For example, think of a situation where two parallel branches have different purposes but, restricted to the dataset in question, behave identically. Either of them can be pruned away, but if the sizes of the branches differ, it matters which one is removed. - Figure 4: What does this figure tell us? Adding a small description of what is shown here and a conclusion would make this more accessible to readers. - L315f: Can the authors propose any reasonable strategy for automatically tuning/setting this hyperparameter? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors addressed most limitations in the main part of the paper, except the point raised in the Questions section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much, pJGB, for your comments on our work! We hope that this reply answers all your questions, and we look forward to further discussion. **New insights from the method should be featured more prominently.** We agree that showing the practical relevance of new methods is important evidence for their usefulness. We will add an extra sentence in the conclusion highlighting that practitioners have begun to verify the hypotheses generated by ACDC, which we have included at the end of this response. We will also add a paragraph to the gendered pronoun appendix (Appendix I) on how the hypothesis generated thanks to ACDC was faithful to the model’s computation; this is also attached at the end of this response. **What does Figure 4 tell us?** We have added a clearer figure (attached in the general comment PDF) with a better caption, and will clarify in the main text. We used held-out i.i.d. test data for this figure rather than the same data the circuit discovery methods used. The takeaway is that the recovered circuits get close to behaving like the full model with few edges. **Do the authors mean create a dataset and train a new model to analyze?** No, we use “dataset” to refer to prompts that show that a model has a particular behavior. We do not update the models’ weights. We will add a clarification to the main text. **"tracr" reference; automating all or one step?** Thank you for pointing out these mistakes! Tracr should have a reference on first appearance, and we only automate the third step. **Redundant features and OR-operation, ACDC will include one subcircuit.** This is a very good point, and it is correct. As a concrete example, consider an NN implementing an OR gate, where both inputs to it are set to the same value (both 1 or both 0).
ACDC will recover exactly one of the OR gate's inputs: first it will attempt to remove one of them, and see that the circuit behavior is unchanged; then removing the second connection will impact the behavior, and it is recognized as important. SP should behave in the same way. In contrast, HISP should not recognize any of the inputs as important. The gradient of the output with respect to OR gate inputs should be zero, since both inputs are held to 0 or 1. We have conducted an experiment showing this, and put the recovered circuits in the general response PDF. We set the weights of a small ReLU transformer by hand, so it implements an OR gate. We then apply each of the algorithms, and they behave as expected: ACDC and SP find exactly one input, and HISP does not find any inputs to such an OR gate. We have included the key figure in the general comment PDF. We have written an Appendix for this experiment, which expands on the three previous paragraphs. In it, we discuss how future work on automating circuit discovery could deal with this issue. For example, ACDC-like methods could run several times, shuffling the parent nodes. Each run will recover a different input to the OR gate. The recovered circuit is then the union of what is recovered in all runs. Gradient descent methods like SP can run with different random seeds. Thank you for finding this interesting property! **Two parallel branches of different size.** This case seems similar to the OR gate example, where circuit recovery algorithms behave differently. We will include it in the OR-gate Appendix. **Sparsest graph guarantee?** None of the algorithms is guaranteed to find the sparsest graph or the branch with the largest effect size. Both ACDC and SP get stuck in local optima, though in practice SP seems to get stuck less often due to the smooth objective and gradient-based optimization. We will mention this and the previous point in the OR-gate appendix, too. 
**Can the authors propose any reasonable strategy for automatically tuning/setting this hyperparameter?** As also discussed with reviewer aEyk in more detail, in practice the iterative nature of ACDC makes selecting an appropriate parameter easier than expected. In short, the number of edges ablated early in ACDC runs can be used as a proxy for the total proportion of edges ACDC ablates. We will address this issue in our paper by explaining more clearly how practitioners did not have trouble setting $\tau$ in real-world use cases (Figure 13 has similar sparsity at the output node to all other locations in the network). We also expect the open-source implementation of ACDC to let the community quickly develop better strategies to tune $\tau$. --- ## Addendum: Additions To The Paper **Addition to main text (Line 309):** Further, there is early evidence of the use of ACDC to help with novel interpretability work, discovering a surprising outline of a subgraph of GPT-2 Small that predicts gendered pronoun completion: practitioners have used ACDC to generate a circuit outline of the most important pathway through a model's computation, and checked that this reflects the model's computation in normal (unablated) forward passes. **Addition to Appendix I (Line 834):** ACDC's output shows that the important internal information flow for predicting the expected gender has three steps. Firstly, Layer 0 attention head 0.4 and MLP0 use the name embedding, which they pass (through intermediary MLPs) via key- and value-composition (Elhage et al., 2021) to attention heads 4.3 and 6.0. Secondly, heads 4.3 and 6.0 attend to the name position to compose with 0.4 and MLP0. Finally, through value-composition with attention heads 6.0 and 4.3 (via MLP7), the outputs of 10.9 and 9.7 produce the expected gendered completion at the output node.
Anonymous (2023) then verified that in a normal forward pass of GPT-2 Small, 0.4 has an attention pattern to itself at the name token, attention heads 4.3 and 6.0 attend to the previous name token, and 10.9 and 9.7 attend to the " is" token. They also perform path patching experiments on intermediate nodes to provide further evidence of the importance of the pathway through the " is" token. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response and explanations. I'm happy to recommend accepting this paper and will increase my score accordingly.
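The OR-gate behavior discussed in this thread (greedy pruning keeps exactly one of two redundant inputs) can be reproduced with a toy sketch. This is illustrative only; the function names, the zero-ablation model, and the 0.5 threshold are our assumptions, not the paper's experiment:

```python
def or_model(inputs_kept, a, b):
    """Toy model: output is the OR of whichever inputs remain connected;
    ablated inputs are zeroed (zero ablation)."""
    x = a if 'a' in inputs_kept else 0
    y = b if 'b' in inputs_kept else 0
    return int(bool(x or y))

def greedy_prune(data, tau=0.5):
    """ACDC-style greedy loop on the two OR-gate inputs: keep an ablation
    when the fraction of datapoints whose output changes stays below tau."""
    kept = ['a', 'b']
    for inp in ['a', 'b']:
        trial = [k for k in kept if k != inp]
        base = sum(or_model(kept, a, b) != or_model(['a', 'b'], a, b) for a, b in data) / len(data)
        err = sum(or_model(trial, a, b) != or_model(['a', 'b'], a, b) for a, b in data) / len(data)
        if err - base < tau:
            kept = trial
    return kept
```

On a dataset where both inputs always agree, removing the first input changes nothing (so it is pruned), after which removing the second breaks the output (so it is kept): exactly one redundant input survives, as the rebuttal describes.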
Summary: This paper proposes an approach for the automatic discovery of circuits (ACDC) in artificial neural networks (applied to transformer-based LLMs), which works by recursively constructing a subgraph of "important" nodes identified through the patching of model activations on datapoints relevant to a specific task (the choice of which is, in general, non-trivial). The authors demonstrate, through many experiments, that ACDC is mostly able to faithfully recover circuits which were manually identified by previous researchers on a variety of tasks (notably Python docstrings, IOI, and induction heads), thus automating a highly labor-intensive part of the circuit discovery process. Additionally, they explore choices of patching value, target metric, and threshold value, whilst also demonstrating that other comparable methods for distillation/subgraph isolation are not as well-behaved as ACDC. In addition to the paper, the authors also release an open-source implementation of ACDC, which has already been applied with some success by other mechanistic interpretability researchers. Strengths: The methodology for automatic circuit discovery proposed by the paper extends previous approaches for activation patching to automate otherwise labor-intensive mechanistic interpretability work. This in and of itself is not a significant novelty, but the paper's strength lies in a thorough experimental investigation of the benefits of ACDC over other subgraph discovery methods, coupled with new methodological insights on how best to perform activation patching. In particular, they provide two novel findings: 1) KL divergence is more well-behaved than logit differences when performing activation patching for circuit discovery, and 2) zero patching, whilst significantly OOD, is often more effective than patching corrupted activations. The presentation of the paper is very good, with a coherent narrative for the experimental investigations and clear figures supporting all claims.
Additionally, the supplementary materials provide further interesting discussion and results, which, given the exploratory nature of circuit discovery, are highly valuable. Lastly, the release of the accompanying ACDC algorithm for use by the community is a significant contribution in and of itself, as demonstrated by the fact that other members of the MechInt community have already applied ACDC in their own research. Weaknesses: No major weaknesses were identified. There are a fair number of minor phrasing issues outlined in the following nitpicks section. In the related work section, explicitly stating how path patching varies procedurally from ACDC might be worthwhile. Additionally, a discussion of how ACDC varies from Causal Scrubbing in its patching methodology may be useful. ## Nitpicks * 3: makes it **too** costly" * 27: circuits **as** subgraphs * 32: "with which to extract" or "for extracting" * 32: remove "that automates part of it" * 85: The choice of phrasing - "clearly defined behaviour" makes this sentence almost tautological. Perhaps an explicit mention of simplicity would be suitable here. Researchers unfamiliar with MechInt may consider e.g. "Writing python code" clearly defined or "writing python docstrings" too broadly defined. * 91: "**Tasks** 1 and 3" inside the parentheses * 94: from **each task, which** researchers * 98: **as** a computational graph * 101: "on the level of detail of _their_ explanations of model behaviour" - subject unclear and wording confusing * 125: such as -> for example (as subject of "such as" could be "tasks"). "predict correct gender predictions" - remove last "predictions"? * 223: It seems the discussion of zero ablations takes place in Section 5 and Appendix F.2, rather than Appendix D. * 233: "we explain how compare to" -> "we compare to" * 236: "experiments use the same modifications to SP and HISP *as*" * 308: "is known*, and through comparison with previous*..." * 310: not clear what "outline of a subgraph" means vs.
just "a subgraph" * 318: "work; a novel contribution" or "work - a novel contribution". * 320: "*within* the community" Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The following questions are not crucial to the narrative of the paper, nor potential critiques of completeness. 1. Have you thought of automated ways to trade-off circuit faithfulness versus sparsity, without re-running ACDC with different values of $\tau$? 2. In general, how sensitive is ACDC to the choice of clean and corrupted datapoints. E.g. for the IOI task does ACDC provide considerably different circuits if very few examples are provided, vs. many? What about paraphrasing or noise injection as in ROME (this is probably very task dependent)? 3. When modifying Subnetwork Pruning you discuss interpolating the mask values - should we expect that linearly interpolating between a clean and corrupted activation is principled (i.e. does not potentially shift the representation to something meaningful but distinct)? 4. Are all values of SP masks 0, or 1 by the end of SP training? If not, then for the sake of counting subgraph edges, what is considered an "unmasked node"? 5. Given the unexpected utility of using zero ablations, would the authors suggest trying this whenever utilizing ACDC? 6. Could the authors expand on how the _locally significant changes_ alternative to detecting salient parts of the subgraph would avoid potential sensitivity to the form of patching? Presumably, some perturbation would be required to measure "effects" and it is not immediately clear what this perturbation is, if not an activation patch. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are addressed, or explored through supplemental experiments. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their kind words about our work. In particular, we are happy to see that they are also excited about open-sourcing ACDC and that they recognize how it has already been used by the community to accelerate mechanistic interpretability research. We were also pleased to see that you (reviewer Zemb) were complimentary of our empirical evaluations of KL divergence and zero patching. We hope our replies to your questions are comprehensive. We have added the nitpicks to our working copy and thank you for documenting them. Regarding the call for more description of how ACDC differs from Causal Scrubbing, we will add the following paragraph to an appendix, but in summary: we think that **ACDC and Causal Scrubbing are complementary tools: Causal Scrubbing is a tool to test hypotheses, while ACDC focuses on the generation of good hypotheses.** Addition to appendices: “Both Causal Scrubbing (Chan et al. 2022) and ACDC make extensive use of path patching: both test how well a circuit reproduces a model's performance by resample-ablating all edges other than the ones specified by the circuit. This path-patching test is a special case of Causal Scrubbing. The step automated by ACDC is the generation of good hypotheses: it allows us to find the smallest circuit hypotheses that reproduce certain levels of model performance in an efficient and automatic manner. Causal Scrubbing itself is not limited to testing just model subgraphs (as we do here) but in principle also allows testing which parts of the inputs are relevant for which parts of the circuit, while ACDC automates only the circuit finding on the subgraph level.” Now we will respond to your questions. > Have you thought of automated ways to trade-off circuit faithfulness versus sparsity, without re-running ACDC with different values of $\tau$? This is an interesting point. Like many ML algorithms, the performance of ACDC is somewhat sensitive to the value of tau.
In general, we think that it is not too much of a burden to do a sweep over $\tau$ values, especially compared to the previous state of the art of manually searching for circuits by hand. In developing our work, we found that the number of connections pruned so far is highly predictive of how many connections will remain at the end of an ACDC run. We provide evidence for this claim in Appendix I, and will add a paragraph to this Appendix in response to the reviewers' interests. > In general, how sensitive is ACDC to the choice of clean and corrupted datapoints … if very few examples are provided, vs. many? The datasets used for the behaviors we investigate are very small. For example, 40 dataset examples are used for the induction task and 100 for IOI and Greater Than. Because we are able to get good results on these small datasets, we do not expect the choice of data points to significantly affect ACDC’s output. In the attached PDF (Figure 4) we evaluated the KL divergence on held-out test examples, but we have not extensively evaluated this. We did not compare noisy corruptions because, in general, we want to test the ability to isolate specific behaviors (e.g. the IOI and Greater Than circuits carefully choose patching distributions so that they can isolate a specific behavior present in one distribution, but not in the other). We included zero-ablation comparisons as this intervention does not require any parameter choice (but choosing a norm term for noise interventions is an additional choice). > Are all values of SP masks 0, or 1 by the end of SP training? If not, then for the sake of counting subgraph edges, what is considered an "unmasked node"? We round the outputs of SP to 0 or 1 and then count edges with the rounded graph. > When modifying Subnetwork Pruning you discuss interpolating the mask values - should we expect that linearly interpolating between a clean and corrupted activation is principled (i.e.
does not potentially shift the representation to something meaningful but distinct)? We use linear interpolation as a continuous approximation through training that uses resampled activations (as in Causal Scrubbing). The generally good performance of SP makes us confident that our continuous approximation, which we then clamp to 0 or 1, is a valid use of the SP approach. Nevertheless, we take your point that this could potentially shift the representation somewhere else that is in distribution but distinct (i.e. meaningful but distinct). We think further development of our approximation is an interesting avenue for future work (adapting gradient-based methods to work with corrupted activations). > Given the unexpected utility of using zero ablations, would the authors suggest trying this whenever utilizing ACDC? We apologize to the reviewer for the data in our incorrect appendix figure (Figure 15, see our global response and the attached PDF) that may have led to this conclusion. We attach in the PDF the correct performance of ACDC, SP and HISP with zero activations on the IOI task in Figure 15, and find that zero ablation performs worse for this task (as for Docstring and Greater-Than). ACDC performance with zero ablations on the tracr tasks was unchanged (perfect), but we do not think that this should be extrapolated to realistic language models. > Could the authors expand on how the locally significant changes alternative to detecting salient parts of the subgraph would avoid potential sensitivity to the form of patching? We don't understand how the locally significant alternative “would avoid potential sensitivity to the form of patching”. In our paper, we discussed how the locally large effects alternative could resolve some issues with negative head recovery, although, since it does not optimize any metric globally, it would be more difficult to interpret the performance of this alternative, unlike with e.g. the KL divergence.
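The mask interpolation discussed in this rebuttal (a learned mask blending each unit's clean activation with a resampled, corrupted one, later clamped to 0 or 1 to count edges) can be sketched as follows; the function and variable names are illustrative, not taken from the paper's code:

```python
import numpy as np

def sp_masked_activation(clean, corrupted, mask):
    """Subnetwork-probing style interpolation: a learned mask in [0, 1]
    blends each unit's clean and corrupted (resampled) activation.
    After training, masks are rounded to 0/1 to count circuit edges."""
    mask = np.clip(mask, 0.0, 1.0)
    return mask * clean + (1.0 - mask) * corrupted

clean = np.array([1.0, 2.0, 3.0])
corrupted = np.array([0.0, 0.0, 0.0])
# mask 1.0 keeps the clean value, 0.0 keeps the corrupted one, 0.5 blends them
assert np.allclose(sp_masked_activation(clean, corrupted, np.array([1.0, 0.5, 0.0])),
                   [1.0, 1.0, 0.0])
```

The reviewer's worry corresponds to intermediate mask values (e.g. 0.5) producing activations that lie off the data manifold yet are still "meaningful"; rounding to 0/1 at the end removes such blended states.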
--- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your comprehensive response to my questions. * The provided summary of Causal Scrubbing vs ACDC is very clear, and appreciated * The discussion of the sensitivity with respect to the datapoint choices is convincing and I don't think further evaluation is necessary. The relative stability of the results is reassuring * As linear interpolation on SP does indeed yield good results, your confidence in its validity does seem well-placed, as it is difficult to imagine how undesirable "semantically meaningful" interpolations (which I had worried about) would manifest without also decreasing performance * It is good to know that zero ablations aren't as effective as I had mistakenly concluded! * This is clear now - it seems I had some initial confusion about "metric" vs. patching when reading this section, though it now seems quite clear. Given the scope of ACDC's application, I retain the current rating and recommend this paper for acceptance as a sound and meaningful contribution to the field of Mechanistic Interpretability.
Summary: The paper is a fresh take on mechanistic interpretability, focusing on the automation of the interpretability task and demonstrating it on attention-based models. The method, ACDC, finds the Pareto-optimal subgraphs of the network, thus bringing down the number of connections to highlight the role each unit plays in the predictions of the model. The authors present extensive experiments and analysis on various tasks, including the interesting IOI and Greater-Than, with further insight on the performance trends in the supplementary work. After providing an idea of the workflow in the domain, they introduce their algorithm, which focuses on iteratively chipping away at the computational graph, creating a sparse graph while retaining good performance on the specified task and metric. They do mention the algorithm's limitations, namely the handicap of not identifying all the abstract units and its sensitivity to hyper-params, while also not being automated end-to-end for a complete interpretability framework setup. Overall the algorithm is a good starting point to build on for automating scalable interpretability. Strengths: - Neat presentation, with well organized sections and appropriate background information. - Clean and crisp algorithm that is easy to understand and yet highly effective in its job. - Experiments are well defined; tasks, metrics and objectives are comprehensively made clear. - Comprehensive ablations and detailed supplementary work to support the claims made and make sense of the findings; special kudos for the highly effective plots and task-wise analysis. - Frank and clear understanding of the shortcomings and strengths of the algorithm make this method a clear one to build on top of, prompting further interesting work in a highly important domain. P.S. A playful modification to the title could be to make it AC⚡DC, as a play on the famous music band. Weaknesses: - Distinction from previous work is not very clear.
Especially as compared to HISP, the top-k heads are analogous to retaining only the influential heads in ACDC. While I agree that the algorithm in itself is distinct, the inspiration from previous works should be made clear. I would suggest adding a subsection to compare the circuits derived from the two methods for further comparison of the approaches (akin to Figure 6), and explicitly highlighting the specific fail cases for HISP that ACDC works for could be a great insight. - While section 5 is helpful in answering questions about the effectiveness of ACDC, Line 253, as pointed out by the authors, raises questions. In my opinion, more work must be undertaken to establish KLD as a faithful proxy. - While Appendix K and H are helpful to establish the “correctness” of the subgraph recovered, unless one knows the true minimal graph, it is difficult to compare and decide which of the methods is “more correct”. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - See weaknesses. - While Figure 16 (Typo in the Greater Than graph with a $\mu$ perhaps?) and 17 differ significantly on the performance on the reset network and the trained network for the SP and HISP networks, the ACDC points are invisible in the Figure 16 subplots for Tracr (Reverse), Tracr (Proportion), and Greater Than. I’m assuming the scale of the graph points to them being in this range, but perhaps consider replotting with the ACDC method points visible. - Line 313: As the authors correctly point out, the issue of missing certain units must be investigated and probed further. Highlighting task-wise the missing sections and what they could correspond to could help draw patterns on the way ACDC works and alleviate weaknesses in further work built on top of this. - Likewise, the final hyperparameters used and ablations on the variance in the performance as one tweaks these hyperparams could be useful to note the sensitivity and brittleness of the algorithm.
For a $\delta$ change in a hyperparam how does the method performance vary? I am open to increasing the score on further clarifying discussion with fellow reviewers and authors. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
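The iterative edge-pruning that the review above summarizes ("iteratively chipping away at the computational graph" while retaining performance) can be sketched as a greedy loop. This is a deliberately simplified illustration with hypothetical names (`acdc_prune`, `run_model`): the actual ACDC traverses a transformer's computational graph in reverse topological order and patches activations rather than toggling abstract edges.

```python
import math

def kl_div(p, q):
    """KL divergence between two discrete probability distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def acdc_prune(edges, run_model, target_output, tau):
    """Greedy ACDC-style sweep: remove an edge whenever doing so raises the
    KL divergence to the full model's output by less than the threshold tau."""
    circuit = set(edges)
    base = kl_div(target_output, run_model(circuit))
    for e in sorted(edges):  # the real algorithm visits edges in reverse topological order
        if kl_div(target_output, run_model(circuit - {e})) - base < tau:
            circuit -= {e}  # pruning e barely changes the output, so drop it
    return circuit

# Toy model in which only edge "a" affects the output distribution.
run_model = lambda c: [0.7, 0.3] if "a" in c else [0.5, 0.5]
assert acdc_prune({"a", "b", "c"}, run_model, [0.7, 0.3], tau=0.01) == {"a"}
```

Sweeping `tau` trades sparsity against faithfulness: a larger threshold prunes more aggressively and yields a smaller circuit at higher KL divergence, which is the Pareto frontier the paper plots.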
Rebuttal 1: Rebuttal: Thank you very much for your well-considered review! We are very happy that you appreciate the work we put into exploring the shortcomings and advantages of each algorithm. Thank you also for the insightful questions, and please let us know if you have any more. **Comparison to HISP.** Thank you for bringing this up! HISP is a heuristic pruning algorithm. ACDC can be seen as a pruning algorithm too. However, its goal is very different from most of the pruning literature, which we mention in related work. For example, ACDC operates on connections between heads (edges) instead of heads (nodes), which sometimes makes the forward pass run more slowly and require more memory, a very undesirable thing in NN pruning. However, as an advantage, ACDC can recover the path-specific effects of edges: in Figure 6, as you mention, ACDC recovers the effect that Layer 0 heads have on the output indirectly, through their input connections to Layer 1. Neither of the other algorithms we compared to can be this specific. However, further emphasizing the conceptual similarities and differences is worthwhile, and we will do this in Section 3 and Appendix H. **Faithfulness of KL divergence.** As far as we understand, your concern about the faithfulness of KL in Section 4 of our work asks two questions: - (1) Is KL a good proxy for the task-specific metric when finding circuits? - (2) Does optimizing KL divergence reliably yield the circuit that’s implementing the behavior in the NN? For example, optimizing KL has to yield a circuit that contains both ‘positive’ and ‘negative’ contributions to the log-likelihood. We do not assume either of these things in the paper, and we do examine them empirically. We believe we have pretty firmly established (1) for the tasks we tried. Sadly we did not have space to include this in the response PDF, but we have plots that measure *task-specific loss on held-out test data*, on experimental runs that use the *KL as a target*.
The result looks pretty much like Figures 16-19 and the updated Figure 4: the held-out task-specific loss improves monotonically with higher numbers of edges. Modifying the $\tau$, $\lambda$, or % parameter monotonically decreases loss and increases the number of edges. Thus KL is a good proxy for task-specific loss, and is not overfitting the subgraph. We have not established (2) very well, although it was a major focus of the experiments of this paper. We are bottlenecked by the lack of good measurements of hypothesis correctness in the field of interpretability, which is exactly your third objection: mostly we don’t know the true minimal graph! We attempted to establish (2) with the ROC plots (e.g. the updated Fig. 3) and the reset network experiments (Figs. 16-19). The ROCs depend on the correctness of previous work, and the other plots are weak evidence, so we still don’t know whether (2) is true. We will highlight this problem further when discussing the metrics at the beginning of Section 4. That said, now that there is a clear way to automate mechanistic interpretability, we expect ACDC to be replaced by faster and better algorithms soon. KL divergence may turn out to be a mediocre target for finding circuits! We will also emphasize this point in the conclusion. **We don’t know the true minimal graph.** Correct, sadly we don’t, and it’s a problem for your previous objection as well. Measuring the correctness of a hypothesis is an open problem in the field. There are some attempts in the literature (Geiger et al. 2021, Chan et al. 2022), but their soundness is only supported by theoretical arguments. Little experimentation targeted at establishing that these approaches reliably tell good from bad hypotheses has been done; instead, they have just been applied. **Sensitivity of the hyperparameters.** Very good point.
The best way to communicate non-linear sensitivity is via plots that show the variation in performance for each value of the ACDC ($\tau$), SP ($\lambda$) and HISP (% heads) parameters. We did so by color-coding points in Figs. 3 and 4 of the updated PDF. The number of recovered edges and the task-specific loss vary smoothly with these parameters, with little noise. Thus, this does not seem to be a problem. The main robustness challenge of these algorithms is that the dataset, metric, and type of activation patching can greatly change the recovered circuit. **Typos in graph.** The $\mu$ is not a typo; it indicates the SI prefix ‘micro’ ($\cdot 10^{-6}$). The values displayed in that plot are extremely small, practically zero. We agree this is confusing and apologize; we will amend the paper to explain this. The ACDC points are not out of range, but our data collection script had a bug and missed this particular run. We have already fixed this oversight in our copy of the paper! **Highlighting the units that the circuit misses.** The IOI negative heads are the only pattern that we noticed. After a more thorough look, we found that they are recovered by ACDC with the KL divergence as a target when the threshold is $\tau \le 0.00398$. This is towards the mid-range of the thresholds we used in the final sweeps in the updated Figure 3, which go from $10^{-5}$ to $10^0$ (the figure goes down to $10^{-9}$, but that is only for the tracr tasks). Unsurprisingly, ACDC does not manage to recover the negative heads when using the IOI metric (logit difference) as a target. This is a reason to prefer the KL over the task-specific metric. --- Rebuttal 2: Comment: Dear 6BEj, we hope our responses were clear. Would you like to ask any further questions?
Rebuttal 1: Rebuttal: We thank all the reviewers for their detailed and positive feedback on our paper. *All* reviewers recommended acceptance and together found our contribution “well-motivated” (pJGB) and a “fresh take on mechanistic interpretability” (6BEj) that brings an “important clarity/systemization to the workflow of these otherwise individual findings” (aEyk) and automates “a highly labor-intensive part of the circuit discovery process” (Zemb). We’ve attached a PDF with updated figures in this global response, and we hope that our individual responses answer each reviewer's specific questions. Unfortunately, we found that our code for the ROC figure was buggy in a subtle way. Our conclusions from the main text remain the same (e.g. ACDC has the greatest performance by AUC), though the updated figure in the PDF does improve the performance of the existing algorithms that we repurposed for mechanistic interpretability. One minor point in the appendix was affected, which we discuss with reviewer Zemb. Our PDF includes 1. An updated ROC plot (Figure 3) with fixed bugs from the earlier figure. 2. An update to the Pareto frontier plot (Figure 4) for clarity (there were no problems with the data collection for this plot), as pointed out by reviewer pJGB. 3. An update to the appendix Figure 15, which shows that zero ablation does not work as well in that case. 4. A figure with experimental results on automated circuit discovery of OR gate mechanisms, to respond to reviewer pJGB. We thank the reviewer for showing interest in this issue with circuit discovery, and we will add this example as an appendix to our paper. We look forward to engaging with the reviewers further on their questions. Thank you all for your work! Pdf: /pdf/13a167a9f5397417e82af78627ea9c0592d8cf4e.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Tame a Wild Camera: In-the-Wild Monocular Camera Calibration
Accept (poster)
Summary: The submission #3482, entitled "Tame a Wild Camera: In-the-Wild Monocular Camera Calibration", presents a novel self-calibration strategy in which the incidence field of the camera is regressed via a deep neural network before using RANSAC to filter outliers and recover the intrinsic parameters of the camera. Strengths: - Without being impeccable, the paper conveys the main idea behind the proposed technique. - Combining the incidence field with more traditional methods to regress the camera intrinsics is "relatively" novel. - The approach is supposed to work even when the images are cropped. Weaknesses: - The literature review is quite incomplete. - Camera distortion is not considered. - The approach is not really novel, as incidence field regression has already been done in the past. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - 1. One of my main gripes with this paper is the claim of novelty. While the reviewer admits that using the monocular normal and depth predictions to estimate the camera's intrinsics is new, the "incidence field" regression is not novel, as it has already been described in "Deep Single Image Camera Calibration with Radial Distortion, CVPR, 2019". In this paper, the authors obtained mixed results using such a strategy. However, one of the significant advantages is being able to deal with optical aberration, which can be challenging to model parametrically. Unfortunately, this manuscript does not integrate camera distortion, which would be meaningful in this context. I believe that introducing this non-parametric approach is not fully justified; maybe training [24] on the exact same cropped data would lead to similar results? (I have not seen this detailed in the experiment part). - 2.
In this regard, it would also be beneficial to integrate "DeepCalib: A Deep Learning Approach for Automatic Intrinsic Calibration of Wide Field-of-View Cameras, CVMP 2018" into the literature review, as it deals with distortion and was also explicitly tested on cropped images (maybe in supplementary). In their case, the estimation on cropped images fails because they did not train on these particular cases, so it may support your narrative. Another missing paper in the literature is the recent T-PAMI "A Perceptual Measure for Deep Single Image Camera and Lens Calibration", which might be of interest regarding recent developments in deep learning-based calibration of cameras from a single image. - 3. Would it be possible to also test on non-cropped images? I feel it is unfair to test with many cropped images as, in reality, most images in the wild are uncropped. - 4. Since the approach has been trained on perspective images only, what would happen in the case of a large field of view camera whose field of view is outside of the training set distribution? In this regard, it would be good to have some statistics regarding the distribution of the field of view used for training. - 5. To really work in the wild, integrating radial distortion seems necessary; how would you integrate it into your framework for training and inference? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I have already expressed all my concerns regarding this work in the previous sections of this review. For all the reasons mentioned above, I would like to issue a rather mixed opinion regarding the acceptance of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to R5 ioKN** We sincerely thank R5 for the detailed comments. We deeply appreciate your feedback and aim to address your concerns thoroughly. Before responding to the concerns, we believe that there could be a misunderstanding of our approach: different from [24, 26, 30, 31], [Deep Single], and [DeepCalib], our approach does not need perspective images in training. As in the main paper Tab. $1$, our approach avoids camera perspective calibration for intrinsic estimation. Except for Tab. $3$, all experiments use non-perspective images in training. We next delve into the concerns in detail. **R5, Q1 Novelty of Incidence Field.** Thanks for bringing up the related work [Deep Single]! There exist several differences, summarized in Tab. D.

Table D: Comparison of methods

| Method | Intrinsic DoF | Persp. Images Reqd | Network Outputs |
|--------|---------------|--------------------|-----------------|
| Deep Single | 1 | Yes | $4 \times 1$ Vector |
| Ours | 4 | No | $H \times W \times 3$ Incidence Field |

We elaborate on the differences further: 1. On Intrinsic Parameterization: [Deep Single] does not parameterize the intrinsics as the incidence field. Instead, they parameterize the intrinsics as the $1$ DoF camera vertical FoV (see [Deep Single] Fig. $2$). 2. On Network Inference: [Deep Single] regresses a $4 \times 1$ vector. We regress the $H \times W \times 3$ pixel-wise incidence field. Next, we use a RANSAC algorithm to retrieve the $4$ DoF intrinsics. 3. On Training Data: [Deep Single] trains with perspective images. Our method trains with perspective and non-perspective images. 4. Potential Confusion caused by [Deep Single] Bearing Loss: The confusion possibly arises from the Bearing Loss in [Deep Single], depicted in their Fig. 5. Our incidence field learning loss (Eqn. 13) shares some similarities with the Bearing Loss, with the distinction that the latter requires camera perspective angles in computation. We acknowledge the similarity.
However, incidence field learning itself takes only a brief paragraph in our work. Our novelty lies more in parametrizing the intrinsics as an incidence field, and in the accompanying RANSAC algorithm. The similarity does not impact our novelty. 5. Train [24] on cropped images: [24] parametrizes the intrinsics as a $1$ DoF vertical FoV. This prevents training [24] on cropped images, since cropped images have $4$ DoF camera intrinsics. **R5 Q2&5: Integration of Image Undistortion.** We appreciate this exciting suggestion of discussing image undistortion works. We will update our references to include the suggested three pieces of literature. We omit image undistortion within the scope of our paper for three reasons: 1. Based on our observation, most public datasets and images on the internet are undistorted. Thus, most monocular camera calibration works, including all compared baselines ([25][30][24][31][26]), assume undistorted images as input. 2. Excellent solutions in learning-based image undistortion already exist, e.g., [Blind Geometric Distortion Correction on Images Through Deep Learning, CVPR, 2019]. 3. In our setting, image undistortion becomes an open classification problem, where whether an image is distorted needs to be verified first. However, we consider this a non-trivial problem requiring another elaborate algorithm to solve, and it is currently beyond the scope of this paper. **Integrating Image Undistortion** We do agree that integrating image undistortion enhances the generalizability of our model. Hence, we discuss the following potential solutions for a distorted image: 1. Synthesize image distortion in the training dataset. 2. Estimate image distortion as a coordinate remapping similar to [Blind], together with the incidence field. This suggests estimating a pixel-wise vector $[v_1, v_2, v_3, \Delta x, \Delta y]$, where $[v_1, v_2, v_3]$ is the incidence field and $[\Delta x, \Delta y]$ is the distortion flow following [Blind]. 3.
Recover image distortion using a Hough Transform following [Blind]. 4. Recover the camera intrinsics using the updated coordinates with distortion corrected. **R5, Q3 Test on uncropped images.** We test primarily on uncropped images: - **All** test images in Tab. $3$ are uncropped. - The **lower** half of Tab. $2$ tests real uncropped images from datasets such as MegaDepth, Waymo, RGBD, ScanNet, MVS, and SceneNet. **R5, Q4 Statistics and performance of camera FoV.** **FoV Statistics** We report our training/testing data FoV statistics in Fig. $3$ of the Supp. material. The figure shows that our training data primarily covers $30-80$ degrees horizontally and $15-60$ degrees vertically. Further, our training and testing data do contain some images with particularly large (over $60$ degrees) or small (below $20$ degrees) FoV. Most of our training data is not perspective images. **Performance Statistics** We thank you for suggesting this insightful analysis! We therefore recompute the Tab. $2$ results over varying horizontal FoV ranges to obtain Tab. E. We report the intrinsics performance without applying assumptions, averaged over all datasets in each range.

Table E: Intrinsics estimation results with FoV variation.

| FoV | 10 - 20 | 20 - 30 | 30 - 40 | 40 - 50 | 50 - 60 | 60 - 70 | 70 - 80 | 80 - 90 | All-Range |
|:-----------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|
| Number | 163 | 726 | 1927 | 2747 | 2554 | 404 | 151 | 21 | 8708 |
| Percentile | 1.9% | 8.3% | 22.1% | 31.5% | 29.3% | 4.6% | 1.7% | 0.2% | 100.0% |
| $e_f$ | 0.173 | 0.119 | 0.113 | 0.120 | 0.128 | 0.109 | 0.120 | 0.112 | 0.122 |

Tab. E suggests that our algorithm maintains robustness across various FoVs, except for exceptionally small ones. This outcome is anticipated, as small FoVs inherently offer a highly limited camera view. --- Rebuttal Comment 1.1: Comment: Thank you for your clear rebuttal.
I apologize for any misconceptions in my original review. Your proposed approach seems indeed to differ significantly from previous works. Although I agree with R2 that this paper would be better suited for a computer vision conference, I would like to maintain my initial positive rating. --- Reply to Comment 1.1.1: Comment: We are grateful for your recognition of the novelty in our work. Your positive comments mean a lot to us. Thank you once again for dedicating your time and effort to reviewing our paper!
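The pipeline defended in this thread (a dense per-pixel incidence field, from which a robust solver recovers the 4 DoF intrinsics fx, fy, cx, cy) can be illustrated with a noise-free least-squares stand-in for the paper's RANSAC; the function name, sample pixels, and intrinsics below are made up for illustration:

```python
import numpy as np

def intrinsics_from_incidence(pixels, rays):
    """Recover (fx, fy, cx, cy) from sampled incidence rays.
    Each ray is proportional to K^{-1} [u, v, 1]^T, hence
        u = fx * rx / rz + cx  and  v = fy * ry / rz + cy,
    which are linear in (fx, cx) and (fy, cy): two least-squares solves."""
    pixels, rays = np.asarray(pixels, float), np.asarray(rays, float)
    ones = np.ones(len(pixels))
    ax = np.column_stack([rays[:, 0] / rays[:, 2], ones])
    ay = np.column_stack([rays[:, 1] / rays[:, 2], ones])
    (fx, cx), *_ = np.linalg.lstsq(ax, pixels[:, 0], rcond=None)
    (fy, cy), *_ = np.linalg.lstsq(ay, pixels[:, 1], rcond=None)
    return fx, fy, cx, cy

# Synthetic check: back-project rays from a known K, then recover it.
K = np.array([[500.0, 0, 320], [0, 480.0, 240], [0, 0, 1]])
pix = np.array([[100.0, 50], [400, 300], [250, 120], [10, 400]])
rays = np.linalg.solve(K, np.column_stack([pix, np.ones(len(pix))]).T).T
assert np.allclose(intrinsics_from_incidence(pix, rays), [500.0, 480.0, 320.0, 240.0])
```

With a network-predicted field some rays are outliers, which is why the paper wraps this kind of minimal solve in RANSAC rather than fitting all pixels at once.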
Summary: This paper proposes a method for single-image camera calibration using a 3D prior the authors refer to as an incidence field. The incidence field is the collection of rays originating from 3D points towards the camera origin that are incident to the image plane. The authors describe how the incidence field can be used to recover the camera intrinsics. Next, a method is proposed for recovering the intrinsics by first estimating an incidence field from an image (using a recently introduced neural network architecture, NewCRFs) and then applying RANSAC. An extensive evaluation shows that the proposed method achieves superior performance to recent methods on numerous benchmarks. Additionally, several example applications are demonstrated. Strengths: - Addresses an important and interesting problem. Single-image camera calibration "in the wild" has received a lot of attention lately and is a fundamental vision task. Knowledge of camera intrinsics is essential for numerous applications. - The proposed approach has technical novelty. In particular, it shows how the concept of an incidence field (backprojected rays in 3D space) is related to camera intrinsics, then introduces a method to estimate it (the method of [62] plus RANSAC). As far as I know, this has not been done before. Often in this space, a novel take on how to represent something leads to downstream performance benefits. - The method is extremely simple, which I consider a positive. - The evaluation is extensive, achieving superior results to recent methods on several benchmarks. - Numerous interesting applications presented, such as detecting image resizing and cropping. - Code provided and will be released with data and models. Weaknesses: - The evaluation should have descriptions of the metrics and how they are calculated (found in supplemental). - The quality of the writing is poor in certain aspects.
For example, L14 in the abstract: "With the estimated incidence field, a robust RANSAC algorithm recovers intrinsic." Ultimately, the manuscript just needs a solid editing pass top-to-bottom. - The quality of the figures, though parseable, could be improved from an aesthetic sense. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: My initial rating is a weak accept. The proposed approach is simple, explained well, has novelty, and the results are compelling. I think this will be of interest to the community. Suggestions: - Full editing pass to address minor language issues. Improve aesthetics of figures (e.g., Figure 2, left). - Implementation details should be in the main document instead of supplemental. - L233 should reference Table 3. Text should introduce the dataset. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations minorly touched on in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to R4 Ujoz** We appreciate the reviewer's suggestions to make our paper accessible to a wider audience. We extend special gratitude to the reviewer for recognizing the technical novelty of our approach and its potential contribution to the community. We will adhere to the reviewer's suggestions in the revision. **R4, Q1 Missing metrics in tables.** Thanks for the suggestion! We will move the evaluation metrics and implementation details from the supplementary to the main paper. **R4, Q2 Improve the quality of writing.** Although we have partially revised our manuscript based on the reviewer's feedback, we will do a solid editing pass over the paper. We include a list of revisions made so far, and will continue improving the manuscript. - L$14$: "With the estimated incidence field, a robust RANSAC algorithm recovers intrinsic." -> "We apply a robust RANSAC algorithm to recover the intrinsics from the estimated incidence field." - Fig.$2$c, "w/ Assum." to "w/o Assum.", and change the other one accordingly. - Tab.$3$ "erceptual" to "perceptual". - L$233$, Tab.$4$ reference to Tab.$3$ reference. - Additional description of Tab. $5$ and Tab. $6$ baselines. **R4, Q3 Quality of Fig. 2.** We re-plotted Fig. $2$, improving its layout and removing the "L"-shaped illustration boxes. We do not place this in the one-page rebuttal PDF because of space constraints. **R4, Q4 L233 should reference Table 3. Text should introduce the dataset.** We fixed these issues in our manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have read the other reviews and author rebuttals. The majority of reviewers are in agreement and I will be retaining my initial rating.
Summary: This paper introduces the incidence field, a per-pixel representation for single-image camera intrinsics calibration. The incidence field is defined as a 2D vector, pointing to the principal point of the image, normalized by the focal length. It is invariant to cropping and resizing. The paper suggests using a neural network to regress the incidence field given an input image. Finally, RANSAC is used to get the 4DoF intrinsics (fx, fy, cx, cy). Strengths: The paper connects the incidence fields with surface normals and depthmaps. This makes it a reasonable representation that can be predicted by a network such as NewCRF, which was originally designed for depth estimation. The incidence field formulation is simple yet powerful. It has desirable properties such as invariance to arbitrary cropping and resizing, as shown in S3.3; and can be derived from surface normals and depth, as shown in S3.2. Incidence fields are independent of camera extrinsics, contrasting Perspective Fields [26], which requires camera roll and pitch for its formulation. This feature enables incidence fields to be trained on extensive data where the camera is calibrated but the pose is unknown. The paper demonstrates SOTA performance on various datasets, both qualitatively and quantitatively. Weaknesses: - The paper claims in its contribution that “Our method makes no assumption for the to-be-calibrated image”, while in fact, it presumes no distortion in the image (L118). - Table 1 needs revision, as it inaccurately describes the baselines and the comparisons are not apples-to-apples. Contrary to the claims made, [24, 26] do not assume Manhattan data during training and can be trained on arbitrary scenes, such as natural scenes. Furthermore, it is unfair to state that your method requires no assumption on the training data when comparing it with others [24, 31, 26], since they also predict camera roll and pitch while your method does not. - There are missing details in Table 2. 
It is unclear what the unit for the error metric is, and which dataset [26] is trained on. If [26] is solely trained on GSV data, then it implies a zero-shot scenario for [26] on all the listed datasets in Table 2. If camera extrinsics are provided, you should be able to train [26] on the same training set as your method, ensuring a fair comparison. - Including a baseline that predicts intrinsics from depth and surface normal predictions (as described in Sec 3.2) would be beneficial. This would empirically validate the derivation and give an idea of the method’s accuracy in Sec 3.2. Furthermore, it would support the argument in L150 that “Minimal solver in Eq. (9) can lead to a poor solution”, therefore further justifying the contribution of the method in Sec 3.4. - Typo in L218 and L233: There is no [26] in Tab 4. It should be Tab 3 instead. - In Table 5 and Sec 4.4, the baseline method is not clarified. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: In real application, whether to apply the assumption still waits for human input. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
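As a concrete reading of the summary above: if (our assumption, not necessarily the paper's exact sign convention) the incidence vector at pixel $(u, v)$ is $((u - c_x)/f_x, (v - c_y)/f_y)$, then two sampled pixels already determine $(f_x, c_x)$ and $(f_y, c_y)$ through a $2 \times 2$ linear system, and RANSAC over pixel pairs recovers the 4DoF intrinsics from a noisy predicted field. A minimal sketch; the function names and the inlier test are ours, hypothetical:

```python
import numpy as np

def solve_f_c(u1, d1, u2, d2):
    # Assumed incidence relation: u_i - c = d_i * f, i.e. d_i * f + c = u_i.
    # Two pixels give a 2x2 linear system in (f, c).
    A = np.array([[d1, 1.0], [d2, 1.0]])
    b = np.array([u1, u2])
    return np.linalg.solve(A, b)  # (f, c)

def ransac_intrinsics(us, dxs, vs, dys, iters=50, thresh=0.5, rng=None):
    """Hypothesize-and-verify over pixel pairs; returns (fx, cx, fy, cy)."""
    rng = np.random.default_rng(rng)
    best, best_inliers = None, -1
    n = len(us)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        try:
            fx, cx = solve_f_c(us[i], dxs[i], us[j], dxs[j])
            fy, cy = solve_f_c(vs[i], dys[i], vs[j], dys[j])
        except np.linalg.LinAlgError:
            continue  # degenerate pair (equal incidence components)
        # Inliers: pixels whose re-projected coordinate matches the prediction.
        inl = np.sum((np.abs(us - (cx + dxs * fx)) < thresh)
                     & (np.abs(vs - (cy + dys * fy)) < thresh))
        if inl > best_inliers:
            best, best_inliers = (fx, cx, fy, cy), inl
    return best
```

On a synthetic noiseless field this recovers the intrinsics exactly; with a noisy network prediction the inlier count selects the most consistent hypothesis.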
Rebuttal 1: Rebuttal: **Response to R3 PGSe** We sincerely thank you for the exceptional comments and for giving our paper an "Accept." We are particularly grateful for recognizing the validity of parametrizing the intrinsics as an incidence field. We next delve into your specific feedback. **R3, Q1 Undistorted Image Assumption.** Great catch! We apologize for the imprecise argument. We assume an undistorted image and will clarify our claim and update the abstract, introduction, related works, and Table $1$ in the revision. However, we humbly suggest that the undistorted image assumption is commonly upheld in all compared baselines ([25][30][24][31][26]), particularly in line with the Manhattan World assumption [31, 32, 47, 63], where detecting image lines inherently relies on an undistorted image. **R3 Q2&3: Fair Comparison in Tabs. $1$ and $2$.** There is a misunderstanding about the baselines. Baseline [26] and the other baselines [24, 30, 31] can **NOT** be trained using ground-truth camera extrinsics. Instead, they assume a gravity-aligned camera during training. The gravity direction is unknown in most public datasets, such as ScanNet. In other words, the known extrinsics only establish the camera pose relative to the first camera. In contrast, the orientation of the first camera relative to gravity remains unknown. Now, we address the concerns comprehensively: 1. **Manhattan data assumption in training.** Thank you for bringing this to our attention! The Manhattan World assumption eventually gives the camera's roll, pitch, and focal length. We illustrate the process briefly below: Manhattan World assumption ↳ Parallel Line Sets, ↳ Two (or three) Vanishing Points, ↳ Camera's roll, pitch, and focal length. Recent learning-based methods [24][31][26] directly acquire the ground-truth camera roll, pitch, and focal length from panorama images. We term this the 'Manhattan data assumption,' which ultimately obtains ground truth derived from the Manhattan World assumption. 
It does not imply the presence of horizontal and vertical lines in the images. We will further clarify this point with an additional description in our Tab. $1$ and Sec. $2$. 2. **Unfair comparison to [24, 31, 26] in Tab. $3$.** Great point! We view Tab. 3 as a fair comparison since our reformulation eliminates the need for perspective calibration. Following the Manhattan World illustration above, traditional baseline methods [31, 32, 47, 63] require simultaneous perspective calibration and intrinsic calibration. This also applies to the SoTA learning-based methods. For instance, [26] necessitates line detection before camera intrinsic estimation, while [31] estimates camera intrinsic using an inferred camera perspective field. In short, perspective calibration is indispensable for camera calibration in baselines. Next, we extend our method and conduct experiments to jointly apply perspective and intrinsic calibration to address your concern in Tab. $B$ of the one-page rebuttal pdf. We modify our network to jointly regress the incidence and perspective fields (defined in [26]). Interestingly, with the incidence field estimated, we can directly estimate the camera roll and pitch from the perspective field. The incidence vector $\mathbf{v}$ and perspective up vector $\mathbf{u}\_x$ (Eqn. $1$ of [26]) determine a 3D plane where the gravity vector (cross-focal point) resides. The intersection of two such planes in 3D space gives the gravity direction. We solve gravity using this as a minimal solver with a RANSAC algorithm. Finally, we determine the horizon line by averaging over $\varphi_{\mathbf{x}}$ (Eqn. $2$). Tab. B shows that our method also outperforms the SoTA methods in perspective calibration. 3. **Unfair comparison to [26] in Tab. $2$.** We now compare with the most recent checkpoint (released after the NeurIPS submission deadline) in Tab. $A$ in the one-page PDF. The newer released checkpoint [26] was trained on bigger 360Cities and EDINA datasets. 
However, we cannot train our model on 360Cities since the dataset is not public. From Tab. $A$, our method maintains SoTA performance with a solid margin over [26]. **R3, Q4 Sec. 3.2 baseline.** Thank you for your valuable input! In the one-page rebuttal PDF Tab. $A$, we incorporate a baseline using the Sec. 3.2 minimal solver. We utilize ZoeDepth and [Estimating and Exploiting the Aleatoric Uncertainty in Surface Normal Estimation, ICCV, 2021] to initialize the monocular depthmap and surface normal. These experiments further justify our idea of parametrizing the intrinsics as the incidence field. **R3, Q5 Typos.** We appreciate your pointing out the typos! We have corrected them in the revised version. **R3, Q6 Baselines in Tab. $5$ and Sec. $4.4$.** Apologies for the oversight. The baselines in Tab. $5$ adopt an identical network structure, with only the last layer modified to a fully connected layer after average pooling for direct regression of the $4$ degrees of freedom (DoF) intrinsics. Additionally, all these models normalize the intrinsics by resizing the image height and width to [0, 1], which aids model convergence. We compare the baseline and our method’s estimated intrinsics using the same evaluation protocol. The baselines in Tab. $6$ of Sec. $4.4$ follow LoFTR in estimating image correspondence. With the estimated correspondences, they apply an OpenCV-based five-point algorithm with RANSAC to estimate the two-view camera pose. --- Rebuttal Comment 1.1: Comment: Hello, thanks for the great effort in the rebuttal. The new results in the pdf resolve my concern. Thanks for the updated experiments to ensure a fairer comparison. Here are some nitpicking comments in response to the rebuttal: - In my opinion, instead of using "Manhattan world assumption" to describe the requirements for some baselines, "gravity-aligned panoramas" is a more accurate term. I still can't entirely agree that previous baselines require a "Manhattan World assumption". 
Manhattan World assumption is that "all surfaces in the world are aligned with three dominant directions, typically corresponding to the X, Y, and Z axes;" (Furukawa, Yasutaka, et al. "Manhattan-world stereo." CVPR 2009.) Getting roll, pitch, and focal length from panorama images [24, 26] only requires the panorama to align with gravity, which is aligning only one axis. - "For instance, [26] necessitates line detection before camera intrinsic estimation, while [31] estimates camera intrinsic using an inferred camera perspective field." In this sentence, [26], [31] should be swapped. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your rectification of the reference order for [26] and [31] in the rebuttal response. Regarding our discussion on the "Manhattan world assumption", your suggestion of substituting it with "Gravity-Aligned Panorama Images" is excellent! We plan to apply the following changes: - We will substitute the main paper Tab.$1$ from "Manhattan-Train" to "GravityAlignedPanorama-Train". - We will clarify the relationship between the "GravityAlignedPanorama-Train" and "Manhattan World assumption". Both assumptions yield the camera roll, pitch, and focal length. However, the former accommodates natural scenes where surfaces are not necessarily aligned with principal axes. We believe there's a specific point that may need further discussion: - "Manhattan World Assumption" suggests the imaging content (surfaces) is aligned to X, Y, and Z axes. - "Gravity-Aligned Panorama Images" suggests the imaging plane is aligned to the gravity (X axes). We think the two are not directly comparable as one is about imaging content and the other is about imaging plane. But we do agree "Gravity-Aligned Panorama Images" relaxes the "Manhattan World Assumption" as accommodating non-aligned surfaces. "Gravity-Aligned Panorama Images" is a more accurate description for the baselines [24, 26, 31].
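The two-plane gravity solver sketched in the rebuttal above (each pixel's incidence ray and perspective up vector span a plane containing the gravity direction; two such planes intersect in the gravity line) can be written out directly. The function below is a hypothetical sketch of that minimal solver, not the authors' code; the exact up-vector definition follows Eqn. 1 of [26] and may differ:

```python
import numpy as np

def gravity_from_two_pixels(v1, u1, v2, u2):
    """Each (incidence ray, up vector) pair spans a plane through the origin
    that contains the gravity direction; gravity is the intersection line of
    the two planes, i.e. the cross product of their normals."""
    n1 = np.cross(v1, u1)   # normal of the plane spanned by (v1, u1)
    n2 = np.cross(v2, u2)   # normal of the plane spanned by (v2, u2)
    g = np.cross(n1, n2)    # direction common to both planes
    norm = np.linalg.norm(g)
    if norm < 1e-9:
        return None          # degenerate configuration (parallel planes)
    return g / norm          # unit gravity direction, up to sign
```

In a full RANSAC loop, pairs of pixels would be sampled and the hypothesis scored against all remaining (incidence, up) pairs, mirroring the procedure described in the rebuttal.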
Summary: An in-the-wild monocular camera calibration method is proposed. It allows estimating the focal lengths $f_x$ and $f_y$ as well as the optical center $b_x$, $b_y$ without any additional information such as a checkerboard or the Manhattan world assumption. The proposed method consists of employing a neural network to predict the incidence field, followed by a RANSAC. Several experiments are performed to demonstrate the ability of the proposed method to perform in-the-wild calibration. Strengths: 1. The proposed approach is simple and efficient. 2. Training a network to predict the incidence field seems straightforward (the authors took a depth prediction network architecture and simply changed the last layer and the loss). Weaknesses: 1. The idea of using a sota depth prediction network to predict the incidence field (section 3.4) seems new, but this is a rather weak contribution. If I am not mistaken, it essentially consists of taking the network from a GitHub page, changing the last layer and the cost function, and retraining the network on the same data. This contribution seems weak for a conference like NeurIPS. 2. The idea of using the incidence field to estimate the calibration parameters (section 3.5) seems new, but the derivation is trivial: here the minimal solver is the solution of a trivial linear system (it takes a few minutes to derive, compared to the Gröbner-basis-based minimal solvers that we encounter in essential matrix estimation). In terms of contribution, I believe this is not strong enough for NeurIPS. Btw, I believe eq. 17 and 18 do not correspond to a RANSAC since you simply quantize the focal length space and test each value. 3. I believe section 3.2 is not a contribution, is it? It is said that this calibration method will not produce good results but I could not find any experiment using it. I suggest including this experiment. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: After carefully reading the paper, I recommend to reject it. I believe the contributions are not sufficient for a conference like NeurIPS and not interesting for the broader NeurIPS community. I suggest to submit the paper to another conference like 3DV or CVPR. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
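For reference, the "quantize and test" search the review refers to (as opposed to a sampling-based RANSAC) can be sketched as follows, assuming the simple-camera case $f_x = f_y = f$ with a centered principal point and an incidence convention of $((u - c_x)/f, (v - c_y)/f)$; the function and names are hypothetical, not the paper's Eqs. 17-18 verbatim:

```python
import numpy as np

def enumerate_focal(pred_dx, pred_dy, H, W, f_candidates):
    """Pick the focal length whose induced incidence field best matches the
    network prediction, assuming fx = fy = f and a centered principal point."""
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)  # pixel centers
    cx, cy = W / 2.0, H / 2.0
    best_f, best_err = None, np.inf
    for f in f_candidates:
        # L1 discrepancy between the candidate's field and the prediction
        err = np.mean(np.abs(pred_dx - (u - cx) / f)
                      + np.abs(pred_dy - (v - cy) / f))
        if err < best_err:
            best_f, best_err = f, err
    return best_f
```

This exhaustive scan over a 1D grid is indeed closer to template matching than to hypothesize-and-verify RANSAC, which matches the reviewer's observation and the authors' later relabeling to "Enumerate w/ Assumption".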
Rebuttal 1: Rebuttal: **Response to R2 wjnN** We thank the reviewer for the positive comments on the soundness and presentation of the paper. We greatly appreciate the valuable, constructive criticism. Your concern helps us to further address the missing connection between Sec. $3.2$ and other sections in the manuscript. We address the concerns in detail. **R2, Q1: Using the SoTA depth network to predict the incidence field is a weak contribution.** We do not claim any contribution in learning the incidence field. Therefore, we briefly discuss learning the incidence field in a short paragraph (Sec.$3.4$) of our methodology section. **R2, Q2 - Part1: Monocular Camera Calibration with incidence field provides insufficient contributions to the community.** We restate our contributions in monocular camera calibration: 1. **Assumption-free method for undistorted in-the-wild images.** Monocular image-based 3D sensing is gaining rapid attention, and most of these sensing methods require known camera intrinsics. Therefore, an intrinsic estimation method for monocular images is the need of the hour. In Fig. A of the one-page rebuttal PDF, we enhance the monocular 3D object detection results of a recent CVPR 2023 work, Omni3D, by replacing their predefined intrinsics with estimated intrinsics from our method. 2. **First to infer the $4$ DoF intrinsics without extrinsics from monocular 3D prior.** Prior works always infer intrinsics alongside the camera extrinsics, either as camera pose ([25]) or perspective angles ([24, 26, 30, 31]). 3. **Simple and Neat solution with competitive performance**. Our method eliminates the need for extrinsic estimation in camera calibration, simplifying the algorithm and enhancing performance. **R2, Q3: Sec. $3.2$ is not a contribution.** We respectfully disagree. While the algorithm in Sec. 3.2 is not finally adopted and is not a contribution, the following insights we learned from this algorithm are contributions. - First, Sec. 
$3.2$ suggests that one can infer the intrinsics from a monocular 3D prior. To the best of our knowledge, we are the first to show this. Prior works always infer intrinsics alongside camera extrinsics. Some use 3D-2D correspondence sets with a predefined 3D template [25] (Tab. $4$ baseline). Others jointly solve intrinsics with the camera roll and pitch angle [24, 26, 30, 31] (Tab. $3$ baselines). Sec. $3.2$ provides one solution which **avoids** extrinsic estimation by directly solving the $4$ DoF intrinsics from a monocular 3D prior. The disentanglement also simplifies the design of the minimal solver, representing a significant contribution of our work. - Second, Sec. $3.2$ suggests the incidence field is a 3D prior. Proving the incidence field is a 3D prior is crucial, as the incidence field is not a typical 3D prior. Monocular depthmaps and surface normals are image-texture dependent, changing w.r.t. image content. In comparison, the incidence field is less texture dependent. For a given camera and image size $H \times W$, the incidence field is identical across different images. However, the incidence field still depends on image content: after cropping and resizing an image, the incidence field changes accordingly, as shown in Fig. $3$ of the main paper. In Sec. $3.2$, Eqn. $4$ suggests the incidence vector is uniquely determined by the surface normal and monocular depthmap. The incidence vector $\mathbf{v}$ in Eqn. $4$ has $2$ DoF, uniquely determined by the $2$ constraints provided in Eqn. $4$. In summary, Sec. 3.2 suggests the incidence field is a monocular 3D prior, uniquely determined by the well-defined monocular 3D priors of surface normal and depthmap. This implies that incidence field learning can generalize to different scenes in a similar way to surface normal and monocular depth estimation. - Third, we report the Sec. $3.2$ baseline performance in Tab. A of the one-page rebuttal pdf. From Tab. A, the Sec. 3.2 minimal solver gives poor calibration results. 
This rationalizes the proposal of the incidence field for camera calibration. Thanks for suggesting this! **R2, Q2 - Part2: The proposed minimal solver is naive.** - First, following **R2, Q3**, using the RANSAC algorithm to recover the intrinsics is just one part of our contribution. - Second, the neatness of our incidence field-based solution is itself a benefit. Following **R2, Q3**, our incidence field-based solution is neat as it avoids extrinsic estimation. Further, our neat solution facilitates rapid implementation and strong performance. Together with our efforts in creating a dataset and defining evaluation metrics, our work serves as a benchmark for other in-the-wild monocular image calibration works. Please also see the comments from **R4**, who acknowledges the benefit of a neat solution. Regarding Eqns. $17$ and $18$, we will modify the subtitle in L$179$ to "Enumerate w/ Assumption". Thank you for pointing this out! --- Rebuttal Comment 1.1: Title: Response Comment: Hello, Thank you for your answers. As far as I am concerned, as I explained in my initial review, even if the paper is technically sound and the results are good, I strongly believe the contributions are not sufficient for a conference like NeurIPS. However, I can see that several reviewers are excited by this paper. Maybe I am missing something... As a consequence, I will keep my initial rating but I will not fight against the other reviewers if they decide to recommend accepting the paper. Best regards, wjnN
Rebuttal 1: Rebuttal: **To all Reviewers:** We value the reviewers' recognition of our method's novelty (R1, R3, R4, R5) and its strong performance (R1, R3, R4). We also thank reviewers R1 and R5 for recognizing the lucidity of our paper's explanations. We present a method for calibrating the $4$ DoF intrinsics of in-the-wild monocular images. Unlike other works, our method can be trained over non-perspective images. Beyond calibration, we showcase compelling downstream applications, including the detection of image cropping and uncalibrated two-view pose estimation. **Convention:** We refer to main and supplementary figures/tables/references by numbers (Fig. 1, Tab. 1, [1]). In contrast, we refer to rebuttal figures/tables/references by letters (Fig. A, Tab. A, [A]). **One-Page PDF:** We've appended extra figures and tables to the one-page PDF. - Fig. $A$: We demonstrate an interesting downstream application by enhancing the in-the-wild monocular 3D object detection work [Omni3D, CVPR'23] using our inferred intrinsic parameters. - Tab. $A$: We additionally benchmark the baseline [26] after being trained with substantially more data. This checkpoint was released after the NeurIPS'23 submission deadline. Also, we provide the baseline results which employ the minimal solver outlined in Section $3.2$. - Tab. $B$: We provide interesting joint perspective-intrinsic calibration results. The method is elaborated in the response to R3, Q $2\& 3$, point $2$. Pdf: /pdf/c4b30524aab81547d8b240b8d46876b3938af0c2.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose a novel learning-based approach for 4 DoF camera intrinsics calibration from a single image in the wild. They propose to use a neural network to predict the incidence rays for each pixel in an image, from which the camera intrinsics can be recovered. The authors motivate the prediction of the incidence rays as an alternative to predicting the depth and normals: as the computation of the intrinsics from these signals involves derivatives, it is very susceptible to noise. The authors' evaluation shows great potential of this method to generalize to images in the wild and shows potential applications like uncalibrated 2-view pose estimation and image transformation detection. Strengths: The idea of leveraging the relationship between the scene depth and normals and the incidence rays is very interesting. The explanations are great and the necessary derivations are all there. There are comparisons to both monocular methods using geometry and methods using known objects. Weaknesses: Most of the tables (2, 4 & 6) with the quantitative results do not provide a metric unit, and while the results are compared to existing methods, the metrics should be introduced properly. For 4.4 and table 5 the description and/or reference of the baseline is missing. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please add the metrics to the tables, introduce them and provide a reference for the definition. Add the description of the baseline in 4.4. How does the performance depend on the image content and the sampling of pixels? E.g. what samples usually provide the best hypotheses, and what would intuitively be the minimal image content that would allow the model to predict the incident rays? The same as for depth and normal prediction? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Similar to depth and normal prediction there are probably cases where the image content is not sufficient to predict incidence rays. The authors did not discuss this case as far as I can see. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Response to R1 TGwc** We sincerely appreciate the valuable feedback provided by the reviewer to enhance our manuscript quality. We also agree on the importance of properly introducing metric units in tables and providing clear references for baselines. We will address these aspects appropriately. We offer the following responses to the raised questions: **R1, Q1: Missing table metric units.** Thank you for bringing this to our attention! The evaluation metrics for Tabs. $2$ and $4$ are at L$31$ of the supplementary. For Tab. $6$, we follow the evaluation protocols of the baseline LoFTR. The reported AUC of the pose error is measured at thresholds (5°, 10°, 20°), with the pose error defined as the maximum angular error in rotation and translation. Please refer to Sec. $4.2$ of LoFTR for the detailed evaluation protocols. We have revised our main paper to incorporate these metrics. **R1, Q2: Missing reference to Sec. $4.4$ and Tab. $5$ baselines.** We apologize for the missing details. In Tab. $5$, the baseline shares an identical network structure, with the last layer modified to a fully connected layer for direct regression of the $4$ degrees of freedom (DoF) intrinsics. During training and testing, we normalize the intrinsic matrix by resizing the image height and width to the range [0, 1], which aids model convergence. We compare the baseline and our method's estimated intrinsics using the same evaluation protocol. The baselines in Tab. $6$ of Sec. $4.4$ follow LoFTR in estimating image correspondence. With the estimated correspondences, they apply an OpenCV-based five-point algorithm with RANSAC to estimate the two-view camera pose. **R1, Q3: Calibration performance w.r.t. sampling strategy.** That is a very insightful question! Monocular camera calibration ultimately relies on image projective distortion. This leads us to think that sampling around image edges could enhance results, as projective distortion is most discernible near the borders. 
We run the following ablation to verify the reviewer's insight. With normalized image coordinates ranging from [-1, 1], we ablate calibration performance through sampling: initially around the image border and then spanning the entire image. We define the sampling area w.r.t. threshold $k$ as: {$ (x, y) \mid |x| \geq k, |y| \geq k, x \in [-1, 1], y \in [-1, 1] $} where $k$ closer to $1$ refers to sampling around the image edges, while $k=0.0$ spans the entire image. We evaluate the MegaDepth, ScanNet, and Waymo datasets, reporting calibration performance with the simple camera assumption applied in Tab. C. The performance at $k=0.0$, thus, aligns with the results in Tab. $2$ of the main paper. Table C. Calibration performance with sampling strategy. | $e_f$ \ $k$ | 0.7 | 0.6 | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 | 0.0 | |:-----------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:| | MegaDepth | **0.106** | 0.107 | 0.107 | 0.107 | 0.108 | 0.108 | 0.109 | 0.109 | | ScanNet | **0.098** | 0.104 | 0.108 | 0.108 | 0.107 | 0.108 | 0.108 | 0.109 | | Waymo | **0.149** | 0.151 | 0.154 | 0.157 | 0.158 | 0.157 | 0.157 | 0.157 | Interestingly, we observe performance improvement when sampling near the border $(k=0.7)$, which partially supports our argument. For instance, the improvement in ScanNet is from $0.109$ to $0.098$. **R1, Q4: Minimal image content to predict the incidence field.** We believe that estimating the incidence field is easier on images with projective distortions. Following this reasoning, we consider object-centric images challenging, as the camera is too close to observe the projective distortion. Yet, a trade-off exists. Although estimating the incidence field in object-centric images is challenging, its accuracy has less impact to 3D sensing. In other words, an accurate 3D structure remains feasible despite noisy intrinsic parameters. 
A supporting fact is that many 3D object and 3D face modeling works use predefined intrinsics or weak projection to render their 3D model onto 2D images. We give two examples: - [img2pose: Face Alignment and Detection via 6DoF, Face Pose Estimation, CVPR'21] - [Fully understanding generic objects: Modeling, segmentation, and reconstruction, CVPR'21] **R1, Q5: Limitation in images where it is hard to predict incidence rays.** We agree! An uncertainty measure for the incidence field prediction would be beneficial! --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your clarifications and further insights. After reading the other reviews and comments I will keep my initial positive rating.
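The sampling region ablated in Table C above, $\{(x, y) \mid |x| \geq k, |y| \geq k\}$ over normalized coordinates in $[-1, 1]$, is straightforward to implement. A small sketch, taken literally from the rebuttal's set definition (note that with both conditions conjoined, large $k$ keeps the corner regions of the image):

```python
import numpy as np

def border_sample_mask(xs, ys, k):
    """Keep normalized pixel coordinates with |x| >= k and |y| >= k.
    k = 0 spans the whole image; k near 1 keeps only the outermost regions."""
    return (np.abs(xs) >= k) & (np.abs(ys) >= k)
```

Pixels passing the mask would then be the ones fed to the minimal solver, reproducing the $k$ sweep reported in Table C.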
Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources
Accept (poster)
Summary: This paper contributes a new data-generation method to train an OOD detector. The paper proposes to first create an auxiliary task for a generator to generate in-distribution samples and OOD samples by using regions of disjoint latent space (equ 6-7). This can lead to disjoint support sets by enforcing a distance-preserving loss (equ 8). In order to transfer the auxiliary task to real OOD detection, it suggests using contrastive learning to bring together generated in-dist samples and real in-dist samples. With these two settings, the OOD detector can learn from generated OOD samples. Strengths: The discussion is complete and clear. The idea of crafting the auxiliary task using generative model(s) is promising in general. By dividing the samples into high and low density regions, this method bypasses the issue of mistaken OOD samples. Weaknesses: The major concern is the performance of the generator (or possibly I missed that piece of information). While section 3 is convincing, in practice (section 4) how can one guarantee that the high-MoG-density region has a high concentration of in-distribution samples? This guarantee may fail when the generator does not perform well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: $\bullet$ Can the authors change the term predictor to OOD detector? $\bullet$ While it is a good idea to generate in-dist and OOD samples using the high-MoG-density region, does it also make the task too easy? I imagine that high-MoG-region samples may look like in-dist samples, while the rest may look like noise. Can the predictor perform better if it learns from more difficult (harder to discern) generated in-dist and generated OOD samples? $\bullet$ Does this method consider the case where the high-MoG-density region still has some OOD samples? $\bullet$ The performance gain of the (OOD) predictor may be comparable to the generative model used for the auxiliary task. Can the auxiliary generative model(s) be used for OOD detection? 
Can you provide its (or their) performance? If the generative model(s) performs worse, what could contribute to the advantage of the predictor (maybe the contrastive learning loss or maybe the auxiliary task)? Can you identify where the advantage comes from? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper states there is a limitation discussion. Where is it? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
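The high/low-density split the review summarizes can be illustrated with a toy latent-space mixture. This is our own hypothetical parameterization (equal-weight isotropic components and a hard log-density threshold $\tau$), not the paper's Eqs. 6-7:

```python
import numpy as np

def mog_log_density(z, means, sigma):
    """Log-density of an equal-weight isotropic Gaussian mixture."""
    d = z.shape[1]
    # (n, K) squared distances from each latent to each component mean
    sq = ((z[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    log_comp = -0.5 * sq / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)
    return np.logaddexp.reduce(log_comp, axis=1) - np.log(len(means))

def split_latents(z, means, sigma, tau):
    """High-density latents play the auxiliary-ID role, the rest auxiliary-OOD.
    Both groups are then mapped through the generator to the input space."""
    logp = mog_log_density(z, means, sigma)
    return z[logp >= tau], z[logp < tau]
```

Because the split is defined purely in latent space, it yields disjoint supports (Condition 1 in the paper's notation) regardless of how good the generator's samples look, which is the point of the rebuttal's "unreliable sources" argument.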
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive comments and generous support! Please find our responses below. > Q1. The major concern is the performance of the generator (or possibly I miss that piece of information). While section 3 is convincing, in practice (section 4) how to guarantee high MoG density region has high concentration of in-distribution samples. This may happen when a generator does not perform well. A1. We sincerely apologize for the confusion. We would like to address your concerns as follows. - **The performance of the generator does not matter.** Our ATOL is different from previous data generation-based methods in that we do not assume the generated data are reliable. Even with unreliable data, they can still benefit OOD detection when they satisfy Conditions 1-2 and Eq. 5. Therefore, we do not need to care about the performance of the generator, making our proposal more attractive than previous works in data generation-based OOD detection. - **Auxiliary data can differ from real data in the input space.** Generally, any data distribution can benefit our predictor if it can be separated into two disjoint supports, playing the roles of auxiliary ID and OOD data, respectively (i.e., Condition 1 should be satisfied). Therefore, even if a high MoG density region does not correspond to ID data (which is often the case, as we demonstrated in Figures 4 to 9 in Appendix E.8), it can still benefit our predictor following Proposition 1. In summary, the key motivation of our ATOL is that: **unreliable data (i.e., auxiliary ID and OOD data) in the input space can still benefit OOD detection, if we align the distributions between auxiliary and real data w.r.t. the predictor (i.e., Condition 2).** > Q2. Can the author change the term predictor to OOD detector? A2. As a common practice [1,2], an OOD detector is defined by both the predictor and the scoring function. 
In our paper, we emphasize our focus on the training procedure for the predictor, and thus use "predictor" more often than "OOD detector". We will further consider the terminology in our revision, trying to make the descriptions clearer. > Q3. While it is a good idea to generate in-dist and OOD samples using high MoG density region, does it also make the task too easy? I imagine that high MoG region samples may look like in-dist samples, while the rest may look like noise. Does the predictor can perform better if it can learn from more difficult (harder to discern) generated in-dist and generated OOD samples? A3. Yes. Using hard samples as auxiliary data seems to be a promising direction, which may boost detection performance and training efficiency. However, there is still no principled way of selecting such data in low-density regions, and thus how to make the data selection more effective remains an open question. Your constructive suggestions will guide us in further improving ATOL, and thanks again for your comments. > Q4. The performance gain of the (OOD) predictor may be comparable to the generative model used for auxiliary task. Can the auxiliary generative model(s) be used for OOD detection? Can you provide its (or their) performance? If the generative model(s) performs worse, what could contribute to the advantage of the predictor (maybe the contrastive learning loss or maybe the auxiliary task)? Can you identify where the advantage comes from? A4. We sincerely apologize for the confusion. We would like to address your concerns as follows. The related discussions will be added in our revision. - **Generators cannot be used for OOD detection.** In our current realization, the generator is used to generate auxiliary OOD data. It is either randomly initialized or pre-trained on ID data, further satisfying the distance-preserving constraint in Eq. 8. 
Therefore, the generator in ATOL is not trained specifically for OOD detection, making it hard to use for OOD detection directly. That said, we believe that using the generator itself for OOD detection is an attractive direction, which will motivate our future studies. - **A poor generator can still benefit OOD detection.** Our ATOL differs from previous data generation-based methods in that we do not assume the generator is reliable in generating high-quality data. In particular, in Section 3, we demonstrate that even if the generated data are not reliable, they can still benefit the predictor if Conditions 1-2 and Eq. 5 are satisfied. We also provide experimental support in Section 5.3, showing that a randomly initialized generator can still help our method improve OOD detection. Therefore, the main advantages of our improvements come from the conditions introduced in Section 3 and the learning strategies in Section 4, not from the adopted generators. > Q5: The paper states there is a limitation discussion. Where is it? A5. Due to space limitations, our discussion of limitations is given in Appendix F. Here, we list two factors that may motivate our future work. First, our current realization of ATOL is relatively intricate, requiring training constraints on both the generator and the predictor. Further studies will explore more advanced conditions that can ease our realization and further reduce computing costs. Second, we observe that the diversity of generated data is closely related to the final performance (cf., Appendix D.6). However, in our current version, we do not account for generator diversity in either theory or algorithm, which will motivate our future exploration. [1]: Weitang Liu, et al. "Energy-based out-of-distribution detection." NeurIPS 2020. [2]: Du Xuefeng, et al. "Vos: Learning what you don't know by virtual outlier synthesis." ICLR 2022. --- Rebuttal Comment 1.1: Title: Thank you for your reply! 
Comment: Thank you for your reply! First I completely agree with other reviewers that this draft is not very well written because the main points are not clearly answered. I would like to summarize my idea here and confirm with the authors. The main point is that, if the auxiliary generative models could achieve C1 which needs the disjoint support set for gen-in vs gen-OOD, and C2 which needs the gen-in is close to real-in. Then this framework will work. Moreover, based on what the author replies, then how to make sure the sets of gen-in and gen-OOD disjoint while gen-OOD more difficult could make the results better. Can the author(s) confirm the above summary? Thank you! --- Reply to Comment 1.1.1: Title: Thank you for your follow-up comments! Comment: Sincere thanks for your follow-up comments! Please find our responses below. > Q1. First I completely agree with other reviewers that this draft is not very well written because the main points are not clearly answered. A1. We sincerely apologize for the confusion. Here, we would like to further summarize the motivation of our paper, hoping that it can help you and the other reviewers better understand our proposal. 1. **Condition 1: auxiliary ID and OOD data should be disjoint in the data space.** To make the unreliable generator benefit our predictor in a reliable way, we find that auxiliary ID and OOD data should be disjoint in the data space. In this case, if auxiliary ID data can play the role of real ones w.r.t. the predictor (cf., Condition 2), auxiliary OOD data can also be reliable w.r.t. the predictor (cf., Definition 1), since these auxiliary OOD data satisfy the standard definition of OOD data following [1], i.e., they have disjoint support from the real ID data. In realization, Eqs. 6-8 are adopted to ensure the disjoint supports between auxiliary ID and OOD data. 2. 
**Condition 2: auxiliary ID data can differ from real ID data in the data space.** As demonstrated in Appendix E.8, generated auxiliary ID data differ from real ID data in semantics/styles. In this situation, our method can still work if the predictor draws no distinction between auxiliary and real ID data in their representations (i.e., Condition 2). In the extreme, even with a randomly initialized generator, completely noisy data can still benefit our ATOL (cf., Table 3). In realization, Eq. 9 ensures that the auxiliary ID data are aligned with the real ID data in the representation space of the predictor. In summary, **although auxiliary ID and OOD data are not reliable due to the unreliable generator, they can still benefit OOD detection if we can make the predictor "believe" they are reliable**, i.e., Conditions 1-2 and Proposition 1. We will refine our presentation to enhance the readability of our paper in the revision. > Q2. I would like to summarize my idea here and confirm with the authors. The main point is that, if the auxiliary generative models could achieve C1 which needs the disjoint support set for gen-in vs gen-OOD, and C2 which needs the gen-in is close to real-in. Then this framework will work. A2. We sincerely thank you for the high-level summary of our paper; your interpretation is completely right. When C1-2 are achieved, we can prove that Proposition 1 holds. Therefore, even unreliable OOD sources given by generative models can still help our model improve OOD detection. > Q3. Moreover, based on what the author replies, then how to make sure the sets of gen-in and gen-OOD disjoint while gen-OOD more difficult could make the results better. A3. Yes. Your suggestions are quite insightful, pointing out an important direction that can help us further improve ATOL. They will motivate our future studies, and we sincerely thank you for your constructive comments. We will add the related discussion in our revision. 
We always welcome your new suggestions or comments!
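To make the construction discussed in this rebuttal thread concrete, here is a minimal, illustrative NumPy sketch of the idea restated in the answers above: auxiliary ID latents are taken from high-density regions of a pre-defined, fixed mixture of Gaussians, and auxiliary OOD latents from low-density regions, so the two latent sets have disjoint supports (Condition 1). The component means `mus`, the std `sigma`, and the threshold `tau` are all hypothetical choices for illustration; this is not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the authors' code) of the latent-space split:
# auxiliary ID latents come from high-density regions of a pre-defined,
# fixed mixture of Gaussians (MoG); auxiliary OOD latents come from
# low-density regions, so the two sets have disjoint supports (Condition 1).
# The means `mus`, the std `sigma`, and the threshold `tau` are hypothetical.

d = 2                                                     # latent dimension
mus = np.array([[-4.0, -4.0], [0.0, 3.0], [5.0, -1.0]])   # fixed component means
sigma = 0.5                                               # shared isotropic std

def mog_density(z):
    """Equal-weight isotropic MoG density at a single point z."""
    sq = np.sum((z[None, :] - mus) ** 2, axis=1)          # squared distances to modes
    comp = np.exp(-sq / (2 * sigma**2)) / (2 * np.pi * sigma**2) ** (d / 2)
    return comp.mean()                                    # equal mixture weights 1/c

# Draw candidate latents uniformly and split them by a density threshold.
rng = np.random.default_rng(0)
tau = 1e-3
cands = rng.uniform(-10, 10, size=(5000, d))
dens = np.array([mog_density(z) for z in cands])
z_id = cands[dens >= tau]    # auxiliary ID latents (near the MoG modes)
z_ood = cands[dens < tau]    # auxiliary OOD latents (everywhere else)
```

In ATOL, both latent sets would then be pushed through the generator to obtain auxiliary ID/OOD inputs; the sketch only illustrates how disjoint latent supports can be obtained from fixed, pre-defined MoG parameters.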
Summary: The paper proposes to fix the mistaken OOD generation issue in generative model-based approaches to out-of-distribution data detection, where mistaken OOD generation means that generated OOD data carry the semantics of ID data. To fix this issue, auxiliary task-based OOD learning (ATOL) is proposed, which is claimed to have the effect of satisfying two key conditions, i.e., auxiliary ID and auxiliary OOD data have disjoint supports in the input space, and auxiliary OOD data are reliable. To achieve this goal, ATOL adds an auxiliary task learning loss and an ID distribution alignment loss to the real task learning loss. The empirical study shows a non-trivial improvement in OOD detection performance when using ATOL. An ablation study further confirms the effectiveness of each loss in ATOL. Strengths: The motivation to fix mistaken OOD generation makes sense to me, as shown in Fig. 1. The two new losses in the proposed ATOL are reasonable and clearly presented. I have to say that I like the empirical studies in the paper since they are quite comprehensive and strong, though many interesting results are presented in the supplementary as a result of page limits. Weaknesses: The drawbacks and strengths of generative model-based OOD detection and its comparison with other approaches, such as scoring or regularized training, are not fully discussed in the paper, e.g., in terms of performance and efficiency. In Table 17, the benefit of ATOL is not quite clear when compared with ReAct and CSI. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. What is M(z) in Equation 6 and 7? Is it the density function of MoG? 2. It is kind of misleading to make the ATOL result bold in Tab. 17 since it is not the best result in terms of AUROC. The best one should be CSI. 3. What is the proportion in Fig. 1b and how is it computed? I tried to search for this information in the paper but it seems that there is no such information. 
I believe this information is quite important to motivate this paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are not discussed in the main paper (please correct me if I am wrong about this). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive comments and generous support! Please find our responses below. > Q1: The drawback and strength of generative model based OOD detection and its comparison with other approaches like scoring or regularized training is not fully discussed in the paper, e.g., performance and efficiency. A1: **Data Generation-based Methods vs. Post-hoc Methods.** Post-hoc OOD detection establishes the basis of OOD detection, where the various scoring functions from post-hoc methods can be used to improve data generation-based methods. For example, in Appendix E.6, we test ATOL with different scoring strategies, demonstrating that proper scoring functions can lead to improved performance for data generation-based OOD detection. Therefore, post-hoc methods are typically orthogonal to data generation-based approaches. **Data Generation-based Methods vs. Regularization-based Methods.** Data generation-based OOD detection improves on traditional regularization-based methods in that we do not require real OOD data to fine-tune our predictor. However, due to mistaken OOD generation, previous generation-based methods can make mistakes. This motivates our ATOL to improve conventional data generation-based methods, leading to superior performance over both data generation-based and regularization-based methods. **Experimental Comparison.** In the main text, our primary focus is on comparing data generation-based approaches. However, we also conduct more extensive evaluations in Appendix E (e.g., Tables 15 to 17), comparing with regularization-based and post-hoc approaches. As shown there, our ATOL outperforms these approaches, revealing our superiority. Moreover, we also compare the training time of a set of representative methods on CIFAR-100 (with similar results on CIFAR-10), summarizing the results in the following table. 
As we can see, our method demonstrates a promising performance improvement over other methods with acceptable computational resources. We will add the related discussion in the Appendix.

| Methods | FPR95 | AUROC | Training Time / Epoch |
| -------- | -------- | -------- | -------- |
| CSI | 80.08 | 85.23 | 98.88 |
| LogitNorm | 63.45 | 80.18 | **25.16** |
| VOS | 75.41 | 78.20 | 38.97 |
| NPOS | 62.72 | 84.17 | 61.54 |
| ATOL | 55.22 | 87.24 | 58.33 |
| ATOL-S | 43.18 | 88.43 | 70.28 |
| ATOL-B | **36.72** | **91.33** | 65.33 |

> Q2: In Table 17, the benefit of ATOL is not quite clear when compared with ReAct and CSI. A2: In Table 17, we conduct experiments on the ImageNet benchmark, one of the most challenging setups in OOD detection (due to its vast semantic space) [1,2]. Therefore, the roughly $5.55\%$ to $6.28\%$ improvement of our ATOL over CSI and ReAct is quite promising. How to improve data generation-based OOD detection for tasks with a large semantic space remains an open question, which will motivate our future studies. > Q3: What is M(z) in Equation 6 and 7? Is it density function of MoG? A3: Yes, $\mathcal{M}(\mathbf{z})$ in Eqs. 6-7 represents the density function of the MoG, namely, $\mathcal{M}(\mathbf{z}) = \sum_{i=1}^c \frac{1}{c} \mathcal{N}(\mathbf{z}|\boldsymbol{\mu}_i, \boldsymbol{\sigma}_i)$, with $\boldsymbol{\mu}_i$ the mean and $\boldsymbol{\sigma}_i$ the covariance of the $i$-th sub-Gaussian. We will make it clearer in our revision. > Q4: It is kind of misleading to make the ATOL result bold in Tab. 17 since it is not the best result in terms of AUROC. The best one should be CSI. A4. We sincerely apologize for our mistake. We will modify the related parts in our revision. > Q5: What is the proportion in Fig. 1b and how is it computed? I believe this information is quite important to motivate this paper. A5. We apologize for the missing description. 
The proportion in Figure 1(b) is the percentage of ID data mixed into the OOD data, reflecting the severity of wrong OOD data during training. We will describe the experimental setup for Figure 1 in more detail in the Appendix. > Q6: Limitations are not discussed in the main paper (please correct me if I am wrong about this). A6. Due to space limitations, our discussion of limitations is given in Appendix F. Here, we list two factors that may motivate our future work. First, our current realization of ATOL is relatively intricate, requiring training constraints on both the generator and the predictor. Further studies will explore more advanced conditions that can ease our realization and further reduce computing costs. Second, we observe that the diversity of generated data is closely related to the final performance (cf., Appendix D.6). However, in our current version, we do not account for generator diversity in either theory or algorithm, which will motivate our future exploration. [1]: Rui Huang, et al. "MOS: towards scaling out-of-distribution detection for large semantic space." CVPR 2021. [2]: Dan Hendrycks, et al. "Scaling out-of-distribution detection for real world settings." ICML 2022. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: Thanks for the response, I will keep my score. --- Reply to Comment 1.1.1: Title: Thanks for supporting our paper to be accepted. Comment: Dear Reviewer eqWq, We will include these discussions in our revision to improve our submission. We sincerely thank you for supporting our paper to be accepted! Best regards, Authors of Submission #560
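As a side note for readers less familiar with the metrics quoted throughout these rebuttal tables, FPR95 and AUROC can be computed from per-sample detection scores. The following is a generic, illustrative NumPy sketch (assuming higher scores mean "more ID" and no score ties), not the evaluation code used in the paper.

```python
import numpy as np

# Illustrative NumPy implementations (not the paper's evaluation code) of the
# two metrics reported in the rebuttal tables. Scores are assumed to be
# "ID-ness" scores (higher = more likely ID) with no ties.

def auroc(id_scores, ood_scores):
    """Probability that a random ID sample scores above a random OOD sample
    (Mann-Whitney U formulation of the area under the ROC curve)."""
    id_scores = np.asarray(id_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    all_scores = np.concatenate([id_scores, ood_scores])
    ranks = all_scores.argsort().argsort() + 1   # ranks 1..N (ties broken arbitrarily)
    r_id = ranks[: len(id_scores)].sum()         # rank sum of the ID samples
    n_id, n_ood = len(id_scores), len(ood_scores)
    return (r_id - n_id * (n_id + 1) / 2) / (n_id * n_ood)

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD samples that pass the threshold retaining
    95% of the ID samples."""
    thresh = np.percentile(id_scores, 5)         # 95% of ID scores lie above this
    return float(np.mean(np.asarray(ood_scores) >= thresh))
```

For perfectly separated scores, e.g. `auroc([2, 3, 4], [0, 1])`, the AUROC is 1.0 and the FPR95 is 0.0; overlapping score distributions push AUROC toward 0.5 and FPR95 toward 1.0.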
Summary: The paper tries to overcome the impact of directly applying incorrect OOD data on the OOD model through auxiliary tasks, thereby improving the performance of OOD tasks. The theoretical part is hard to follow, and the experimental part proves the effectiveness of the theory. Strengths: 1. The paper introduces an auxiliary OOD detection task to combat mistaken OOD generation. 2. The proposed method requires a small additional calculation cost. 3. Experiments show the effectiveness of the proposed method. Weaknesses: 1. The proof of C2 is not clear enough to allow me to clearly understand why achieving C2 can better utilize OOD data. If the model is strong enough or fully trained, it can still confuse incorrect OOD data even if formula 5 is met. Therefore, I judge that the training steps of the algorithm should not be too many and the model should not be too large. Does the author have an explanation for this aspect? 2. There is a writing error in part b of Formula 10. 3. Many aspects of the experiment followed the settings of reference [31], but why not include them in the comparison? 4. The paper is really hard to follow, please polish the paper carefully. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: There are too many Mathematical notations. It is suggested to sort out a table to make it clearer. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive comments and generous support! Please find our responses below. > Q1. The proof of C2 is not clear enough to allow me to clearly understand why achieving C2 can better utilize OOD data. A1. Thank you for your valuable concern. We would like to answer your questions as follows. The related discussions will be added in our revision, especially for Section 3. - **How to interpret Condition 2?** Measured in the embedding space of the predictor, Condition 2 states that auxiliary ID data approximately follow the same distribution as the real ID data. Therefore, a proper predictor should be trained to satisfy such a condition, motivating Eq. 9 in our realization. - **Why is Condition 2 important?** Auxiliary ID/OOD data can arbitrarily differ from real ID/OOD data in the data space, i.e., auxiliary data are unreliable OOD sources. However, if Condition 2 is satisfied, the model "believes" the auxiliary ID data and the real ID data are the same. Then, the auxiliary OOD data will have disjoint support from the real ID data and thus achieve reliability w.r.t. the predictor (cf., Proposition 1). Therefore, Condition 2 is critical for our ATOL. > Q2. If the model is strong enough or fully trained, it can still confuse incorrect OOD data even if formula 5 is met. Therefore, I judge that the training steps of the algorithm should not be too many and the model should not be too large. Does the author have an explanation for this aspect? A2. Many thanks for your question. Conditions 1-2 ensure that unreliable sources will not mislead our predictor in OOD detection. In fact, if the model is stronger, we can better satisfy Condition 2 and Eq. 5. Then, according to Proposition 1, auxiliary OOD data are more likely to be reliable and will not confuse our predictor. 
The following experimental results on CIFAR support the above claims, where using a more complex model (i.e., DenseNet-121) leads to better performance in OOD detection. We will add the related discussion below Proposition 1.

CIFAR-10

|Method|SVHN||LSUN-Crop||LSUN-Resize||iSUN||Texture||Places365||Average||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|
|DenseNet-121|18.05|96.27|1.45|99.58|4.35|99.88|4.45|98.86|20.90|95.70|25.85|94.39|12.51|97.28|
|WRN-40-2|20.60|96.03|1.48|99.59|5.20|98.78|5.00|98.76|26.05|95.03|27.55|94.33|14.31|97.09|

CIFAR-100

|Method|SVHN||LSUN-Crop||LSUN-Resize||iSUN||Texture||Places365||Average||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
||FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|FPR95|AUROC|
|DenseNet-121|70.30|85.96|61.55|87.93|22.15|95.87|22.30|95.59|48.30|87.87|77.15|79.28|50.29|88.75|
|WRN-40-2|70.85|84.70|13.45|97.52|51.85|90.12|55.80|89.02|63.10|83.37|75.30|78.86|55.06|87.26|

> Q3. There is a writing error in part b of Formula 10. A3. Thank you for your kind correction. We will correct Eq. 10 in our revision. > Q4. Many aspects of the experiment followed the settings of reference [31], but why not include them in the comparison? A4. In our evaluation, we **have conducted** experiments for Free Energy (the method suggested in reference [31] of our paper). We will make it clearer in our revision. > Q5. The paper is really hard to follow, please polish the paper carefully. A5. We sincerely apologize for the confusion. Following your kind suggestion, we will summarize the key concepts and heuristics in our revision. Besides, we will add a notation table in the Appendix and remarks for the theories in Section 3, hoping they can help you better understand our paper. If you have any further suggestions, we look forward to discussing them and are happy to answer any questions that might arise. 
> Q6. There are too many Mathematical notations. It is suggested to sort out a table to make it clearer. A6. Thank you for the kind suggestion. We summarize the adopted notations in the following table, which we will add to the Appendix.

|Notation|Description|
|:--:|:--:|
|**Spaces**||
|$\mathcal{X}$ and $\mathcal{Y}$|the data space and the label space|
|**Distributions**||
|$\mathcal{P}\_{\text{X,Y}}^{\text{ID}}$ and $\mathcal{P}\_{\text{X}}^{\text{ID}}$|the joint and the marginal real ID distribution|
|$\mathcal{P}\_{\text{X}}^{\text{OOD}}$|the marginal real OOD distribution|
|$\mathcal{G}\_{\text{X,Y}}^{\text{ID}}$ and $\mathcal{G}\_{\text{X}}^{\text{ID}}$|the joint and the marginal auxiliary ID distribution|
|$\mathcal{G}\_{\text{X}}^{\text{OOD}}$|the marginal auxiliary OOD distribution|
|$\mathcal{M}\_{\text{Z}}$|the specified MoG distribution|
|$\mathcal{U}\_{\text{Z}}$|the specified uniform distribution|
|**Data**||
|$\mathbf{x}\_{\text{ID}}$ and $y\_{\text{ID}}$|the real ID data and label|
|$\hat{\mathbf{x}}\_{\text{ID}}$ and $\hat{y}\_{\text{ID}}$|the auxiliary ID data and label|
|$\hat{\mathbf{x}}\_{\text{OOD}}$|the auxiliary OOD data|
|$\mathbf{z}\_{\text{ID}}$ and $\mathbf{z}\_{\text{OOD}}$|the latent ID data and the latent OOD data|
|$\mathcal{Z}^{\text{ID}}$ and $\mathcal{Z}^{\text{OOD}}$|the latent ID data sets and the latent OOD data sets|
|**Models**||
|$\mathbf{h}$|the predictor: $\mathbb{R}^{n} \rightarrow \mathbb{R}^c$|
|$\boldsymbol\phi$ and $\boldsymbol\rho$|the feature extractor and the classifier|
|$s(\cdot;\mathbf{h})$|the scoring function: $\mathbb{R}^{n}\rightarrow \mathbb{R}$|
|$f_{\beta}(\cdot)$|the OOD detector: $\mathbb{R}^{n}\rightarrow \\{\text{ID}, \text{OOD}\\}$, with threshold $\beta$|
|$G$|the generator: $\mathbb{R}^m\rightarrow\mathbb{R}^n$|
|**Loss and Function**||
|$\ell_{\text{CE}}$ and $\ell_{\text{OE}}$|the ID loss and the OOD loss|
|$\ell_{\text{reg}}$|the generator regularization loss|
|$\ell_{\text{align}}$|the alignment loss|
|$\boldsymbol{\phi}'(\cdot)$|the mapping function|
|$\mathcal{M}(\cdot)$|the density function of the MoG|

--- Rebuttal 2: Title: Looking forward to your responses or further suggestions/comments! Comment: Dear Reviewer 3oBY, We have addressed your initial concerns regarding our paper (see https://openreview.net/forum?id=87Qnneer8l&noteId=klW9OVRexe). We are happy to discuss them with you in the OpenReview system if you still have any concerns/questions. If you have more suggestions, please tell us, and we will merge them into our revision as well! Best regards, Authors of Submission \#560
Summary: One of the techniques for detecting OOD instances is to train a model on OOD data. However, that task is not easy due to the difficulty inherent in collecting such OOD data. Rather than collecting such data, this paper proposes instead to generate it, and to train an auxiliary task to improve the OOD detection capabilities of deep learning networks. The proposed approach is called ATOL, for Auxiliary Task-based OOD Learning, and aims to address one fundamental flaw in existing data generation-based detection methods: the collection of OOD instances from ID data that can mistakenly be labeled as OOD while in reality being ID. Strengths: This paper addresses an interesting problem that I think is quite overlooked in the research community. Indeed, many studies focus on collecting OOD instances without necessarily evaluating the possible side effects of the collected data. This paper identified that instances that are collected and labeled as OOD instances can in fact be ID instances, which could lead to poor generalization performance on ID data and poor OOD detection capabilities. The approach the authors devised to address the problem is sound and quite intuitive, and the evaluation is quite strong. Weaknesses: Although this paper addresses the OOD detection problem from a data-generation perspective, I would have very much liked to see how their approach fares against other techniques like distance-based OOD detection methods. Some interesting distance-based OOD detection mechanisms have been proposed in the recent past. For instance, CIDER [1] has achieved state-of-the-art OOD detection performance that shouldn't be overlooked by researchers approaching OOD detection from a data-generation perspective. This would help educate mainstream readers more on exactly what techniques to pursue to robustify their models against OOD samples. A more fundamental limitation of this study is the fact that it heavily relies on a mixture of Gaussians to decide which latents to consider as OOD and which ones to consider as ID. As MoGs can be sensitive to outliers and have rather limited expressive power, their accuracy needs to be presented in the paper to showcase their effectiveness in helping collect the data to train the auxiliary task on. [1]: How To Exploit Hyperspherical Embeddings For OOD detection? https://arxiv.org/pdf/2203.04450.pdf Technical Quality: 3 good Clarity: 3 good Questions for Authors: Based on the limitations I raised above, I would suggest the authors perform a comparative study with some of the distance-based OOD detection approaches like CIDER. Additionally, evaluating the performance of the MoGs they used could help validate further the effectiveness of their approach. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your constructive comments and generous support! Please find our responses below. > Q1. Although this paper addresses the OOD detection problem from a data-generation perspective, I would have very much liked to see how their approach fares against other techniques like distance-based OOD detection methods, e.g., CIDER. A1. In the main text, we mainly compare our ATOL with other data generation-based approaches. However, we also conduct more evaluations in Appendix E (e.g., Tables 15 to 17), comparing with regularization-based and post-hoc approaches. We include several representative distance-based approaches in these experiments, such as KNN [1] and CSI [2]. As shown in Appendix E, our ATOL outperforms these approaches, revealing our superiority. Following your kind suggestions, we further compare with CIDER and ViM [3] (both distance-based methods) on the CIFAR-10/100 benchmarks, summarizing the results in the following tables. **Compared with the SOTA and vital baseline CIDER, our ATOL still reveals superior results**. We will add the related discussions for CIDER and other distance-based methods to Appendix E in our revision. 
CIFAR-10

| Method | SVHN | | LSUN-Crop | | LSUN-Resize | | iSUN | | Texture | | Places365 | | Average | |
|:----------:|:-----:|:-----:|:---------:|:-----:|:-----------:|:-----:|:-----:|:-----:|:-------:|:-----:|:---------:|:-----:|:-------:|:-----:|
| | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC |
| ViM | 56.97 | 89.77 | 49.96 | 91.43 | 63.54 | 88.15 | 62.20 | 88.47 | 45.20 | 91.76 | 47.86 | 91.45 | 54.29 | 90.17 |
| CIDER | **5.86** | **98.36** | 7.35 | 98.50 | 47.58 | 93.64 | 47.15 | 93.60 | 28.04 | 94.79 | 41.10 | 91.03 | 29.51 | 94.99 |
| Our Method | 12.75 | 96.92 | **4.60** | **98.92** | **0.65** | **99.78** | **0.55** | **99.83** | **10.25** | **97.12** | **22.85** | **94.80** | **8.61** | **97.90** |

CIFAR-100

| Method | SVHN | | LSUN-Crop | | LSUN-Resize | | iSUN | | Texture | | Places365 | | Average | |
|:----------:|:-----:|:-----:|:---------:|:-----:|:-----------:|:-----:|:-----:|:-----:|:-------:|:-----:|:---------:|:-----:|:-------:|:-----:|
| | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC |
| ViM | 72.32 | 82.92 | 74.03 | 81.57 | 84.89 | 77.03 | 84.15 | 76.69 | 33.51 | 91.72 | 64.17 | 79.57 | 68.78 | 81.58 |
| CIDER | **52.21** | 88.44 | 46.88 | 90.18 | 52.23 | 89.89 | 47.57 | 89.91 | 84.67 | 70.62 | 84.67 | 71.82 | 61.37 | 83.48 |
| Our Method | 54.65 | **89.69** | **43.95** | **91.27** | **7.80** | **98.57** | **9.60** | **98.13** | **37.45** | **89.51** | **66.90** | **80.82** | **36.72** | **91.33** |

> Q2. A more fundamental limitation of this study is the fact that it heavily relies on a mixture of Gaussian to decide on what latents to consider as OOD and which ones to consider as ID. As MoGs can be sensitive to outliers, have a rather limited expressive power, their accuracy needs to be presented in the paper to showcase their effectiveness in helping collect the data to train the auxiliary task on. A2. 
We sincerely apologize for the confusion. We would like to answer your questions as follows. The related discussion will be added in our revision, especially for Section 3. - **Reliance on the MoG Assumption.** The MoG just provides a simple way to generate data; the key point is to ensure that auxiliary ID/OOD data have disjoint supports (i.e., Condition 1). Therefore, if Condition 1 is satisfied properly, other noise distributions, such as beta mixture models and the uniform distribution, can also be used. We will explore different choices of noise distributions in the future. - **Fitting the MoG distribution.** Our ATOL differs from previous data generation-based methods in that we do not require the generated data to be reliable in the data space. ATOL does not involve fitting the MoG to real ID data; the parameters can be pre-defined and fixed. Therefore, we do not need to consider overfitting or the accuracy of the MoG in our paper. Although the generated data are unreliable in the data space (e.g., auxiliary ID data may differ from their real counterparts), they can still benefit the predictor when Conditions 1-2 and Eq. 5 are satisfied. Heuristically, these conditions and constraints make the predictor draw no distinction between auxiliary and real ID data. Therefore, auxiliary OOD data are beneficial from the predictor's perspective since they have disjoint support from the real ID data. [1]: Yiyou Sun, et al. "Out-of-distribution detection with deep nearest neighbors." ICML 2022. [2]: Kimin Lee, et al. "A simple unified framework for detecting out-of-distribution samples and adversarial attacks." NeurIPS 2018. [3]: Haoqi Wang, et al. "Vim: Out-of-distribution with virtual-logit matching." CVPR 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed rebuttal and the additional results. The results are compelling. I am raising my rating to Weak Accept. --- Reply to Comment 1.1.1: Title: Thanks for supporting our paper to be accepted. 
Comment: Dear Reviewer Q6fc,

Glad to hear that your concerns have been addressed. We will include these experiments in our revision, as you suggested. Thanks for supporting our paper's acceptance!

Best regards,
Authors of Submission #560
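The disjoint-support construction described in A2 above can be made concrete. Below is a minimal sketch (our own illustration, not ATOL's code; the 2-D latent space, the component means, and the `margin` parameter are all hypothetical choices): auxiliary ID latents are drawn from a fixed, pre-defined MoG, and auxiliary OOD latents are rejection-sampled to stay away from every component, approximating Condition 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-defined (not fitted) MoG parameters in a 2-D latent space; illustrative only.
means = np.array([[-4.0, 0.0], [4.0, 0.0]])
sigma = 0.5

def sample_aux_id(n):
    """Auxiliary ID latents: draw from the fixed MoG."""
    comps = rng.integers(len(means), size=n)
    return means[comps] + sigma * rng.standard_normal((n, 2))

def sample_aux_ood(n, margin=3.0):
    """Auxiliary OOD latents: uniform draws rejected if too close to any
    MoG mean, giving (approximately) disjoint support (Condition 1)."""
    out = []
    while len(out) < n:
        z = rng.uniform(-8, 8, size=2)
        if np.min(np.linalg.norm(means - z, axis=1)) > margin:
            out.append(z)
    return np.array(out)

id_z, ood_z = sample_aux_id(200), sample_aux_ood(200)
# By construction, every OOD latent is more than `margin` away from both means.
assert np.min(np.linalg.norm(ood_z[:, None] - means[None], axis=-1)) > 3.0
```

Since the MoG is never fitted to real data, any other pre-defined distribution with the same disjoint-support property (e.g., a beta mixture) could be substituted here without changing the logic.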
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Embracing the chaos: analysis and diagnosis of numerical instability in variational flows
Accept (poster)
Summary: The authors present an analytical framework to quantify the effects of numerical errors on sampling, density and ELBO estimation in variational flows. The framework leverages shadowing theory to derive error bounds based on the shadowing window sizes and the local Lipschitz constants of the dynamical operators. A procedure to verify whether the shadowing property holds in a given system is provided.

Strengths:
* [Significance] Numerical analysis is an important and under-studied aspect of flow-based models. This work takes a step forward in this direction and offers a plausible explanation for why fast accumulation of numerical error does not necessarily lead to bad results.
* [Quality] The derivations of the numerical bounds seem to make sense intuitively, although I did not check the math in detail.
* [Clarity] The material is presented in a clear and logical manner. I find it easy to follow in general.
* [Novelty] There exists prior work based on shadowing theory, but this work extends the analysis to the computation of density and ELBO estimates.

Weaknesses:
* My primary concern about this work is the usefulness of the presented framework in practice. All results are predicated on the basis that the shadowing property is already established. The approach presented in Section 5 looks cumbersome and does not seem to scale particularly well.
* I do not find the numerical experiments to support the claims very well. They only demonstrate some qualitative traits: sampling, density and ELBO errors are much lower compared to trajectory errors, which coincides with the observation that the derived shadowing window sizes are relatively low. There is not much numerical evidence showing how tight the derived bounds are.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * The analysis in section 4.3 is predicated on the basis that (ln 181) “when $F$ and $B$ are ergodic and measure-preserving for the target $\pi$”, which is a pretty strong assumption. Do you have ways to establish this? * The horizontal axis in Figure 7 does not have the same range as those in Figures 4 to 6. Is this due to computational constraints, i.e. the proportionality to $N$? * It appears that $\epsilon$ is still growing at the end while the sampling, density and ELBO errors all seem to have converged. Do you have any comments on the qualitative behaviors when extrapolating further? This would imply that the bounds eventually become very loose. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: There is not much explicit discussion on the limitations. The computational cost for establishing and calculating the shadow window size is an obvious one. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our manuscript! Please see our point-by-point response below.

> My primary concern about this work is mainly on the usefulness of the presented framework in practice. All results are predicated on the basis that the shadowing property is already established.

Please see our general response 5 for a discussion of assumptions and the scope of our work.

> The approach presented in section 5 looks cumbersome and does not seem to scale particularly well.

Your concern about the complexity and scalability of the approach presented in Section 5 is valid. However, we wish to highlight that the primary focus of this work is to build a theoretical framework to understand the numerical stability of variational flows, and to explain the counterintuitive phenomenon regarding numerical error in a rigorous manner. The approach presented in Section 5 should be considered a diagnostic tool that enables users to verify when our theory can be applied; it is not positioned as an algorithm that would be run each time variational flows are employed. Even so, as discussed in our general response 2, it is likely that a much more efficient algorithm for computing $\lambda$ exists.

> "I do not find the numerical experiments to support the claims very well. They only demonstrate some qualitative traits - sampling, density and ELBO errors are much lower compared to trajectory errors, which coincides with the observation that the derived shadowing window sizes are relatively low. There is not much numerical evidence showing how tight the derived bounds are"

Please see our general response 4 for a discussion of the experiments.

> "It appears that $\epsilon$ is still growing at the end while the sampling, density and ELBO errors all seem to have converged. Do you have any comments on the qualitative behaviors when extrapolating further?
This would imply that the bounds eventually become very loose.”

This is indeed an interesting phenomenon; as mentioned in our general response 3, understanding how $\epsilon$ scales with $N$ in more generality is an important direction of future work. Indeed, if $\epsilon$ continues to grow, we expect that eventually the bounds should become loose. Empirically, it seems to grow linearly with a gentle slope, and we suspect the range of $N$ for which these bounds are reasonable is large. But we cannot make a rigorous claim about this at this point. Nonetheless, we believe that our current results offer valuable insights and represent significant contributions to the community. To the best of our knowledge, our theory is the first in the variational flow literature that rigorously explains how numerical error influences the orbit evaluation and downstream tasks in different ways.

> “The analysis in section 4.3 is predicated on the basis that (ln 181) “when F and B are ergodic and measure-preserving for the target”, which is a pretty strong assumption. Do you have ways to establish this?”

To be clear, these assumptions are not required by all analysis in this section; they are only used to obtain stronger results when the dynamical system targets a particular distribution. Since ergodicity and $\pi$-invariance are the two key conditions for convergence in the MixFlows paper [5], and are also quite common in ergodic theory [Eisner15], we studied how shadowing would interact with those conditions in variational methods. Our intention was to provide a new insight that a dynamical system targeting a particular distribution may behave even better numerically than a general system. Given that they are standard conditions, verifying them is beyond the scope of the present work. However, note that there are well-known measure-preserving systems often used in statistics (e.g., Hamiltonian flows, or an ODE satisfying the Liouville conditions [Neklyudov21]).
Although one must discretize ODEs in practice, note that our numerically approximated map $\hat F$ can involve discretization and does not need to be measure-preserving. On the other hand, ensuring ergodicity is challenging in practice, as acknowledged in Section 4.3 of [5]. We agree that verifying ergodicity is a current limitation of MixFlows, but addressing this issue is beyond the scope of our work. [Eisner15] Eisner, T., Farkas, B., Haase, M., and Nagel, R. Operator Theoretic Aspects of Ergodic Theory. Graduate Texts in Mathematics. Springer, 2015. [Neklyudov21] Neklyudov, K., Bondesan, R., and Welling, M. Deterministic gibbs sampling via ordinary differential equations. arXiv:2106.10188, 2021. > “The horizontal axis in Figure 7 does not have the same range as those in > Figures 4 to 6. Is this due to computational constraints, i.e. the > proportionality to N?” This was just a minor oversight on our behalf. We originally chose the range of these plots to illustrate the typically linear (or slower) relationship between the shadowing window size and $N$. We can certainly make the range of $N$ match Figure 6 for the camera ready–thank you for pointing this out! > “There is not much explicit discussion on the limitations. The computational cost for establishing and calculating the shadow window size is an obvious one.” Please see general response 3 for a discussion of limitations, and general response 2 for a discussion of computational cost. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I certainly agree that there is good value (in terms of novelty and significance) in the theoretical framework - that's why I gave a passing score to begin with. However, I do still believe more concrete evidence is needed to demonstrate the practical usefulness. Since these were not provided in the rebuttal, I remain with my previous score of 5. 
--- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your response and review comments -- we appreciate your insights!
Summary: This paper investigated the impact of accumulated numerical error on sampling, density evaluation, and evidence lower bound (ELBO) estimation in variational flows. It demonstrated that the results produced by flows are not destroyed by serious numerical instability. To explain this phenomenon, it leveraged shadowing theory to theoretically bound the error of sampling, density evaluation, and ELBO estimation. It also developed a diagnostic procedure that can be used to validate results produced by numerically unstable flows in practice.

Strengths: This paper posed an interesting question: why does accumulated numerical error not influence the results of sampling, the ELBO, etc.? The technique is sound, and the analyses are solid. The derived theoretical results support its claims.

Weaknesses: The presentation could be improved, and the notation is hard to follow; see details below.

Technical Quality: 3 good Clarity: 1 poor

Questions for Authors:
1. What is the definition of $s_k$ in Theorem 4.1?
2. What is the range of $\mathcal{X}$? Does it belong to $\mathbb{R}$, $\mathbb{R}^N$, or $\mathbb{R}\times[0,T]$?
3. I cannot follow the information in Figure 1(b)(d), e.g., what do you want to show through the dots in these figures?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: The limitations are not fully discussed in this paper; e.g., the theory only provides explanations and does not inspire a new algorithm for generative learning. This may narrow the theory's impact, as practitioners could ignore it.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our manuscript! Please see our point-by-point response below.

> What is the definition of $s_k$ in Theorem 4.1?

"s" stands for "shadowing". $(s_k)_{k = 0}^N$ denotes an exact orbit starting at $s = s_0$ that "shadows" the numerical trajectory (as mentioned in lines 132--133). Please refer to our definitions and illustrations of orbits, found between lines 117--126 and in Figure 4. We will make sure to clarify this explicitly in the camera-ready version!

> What is the range of $\mathcal{X}$? Does it belong to $\mathbb{R}$, $\mathbb{R}^N$, or $\mathbb{R}\times[0,T]$?

As outlined between lines 63--68, $\mathcal{X} \subseteq \mathbb{R}^d$ represents the space on which the variational family $\\{q_\lambda: \lambda \in \Lambda\\}$ is supported.

> I cannot follow the information in Figure 1(b)(d), e.g., what do you want to show through the dots in these figures?

Figure 1(b)(d) serves to illustrate the impact of floating-point representation errors during flow computations. Ideally, in the absence of numerical errors, the computed orbit (blue dots) would coincide perfectly with the exact orbit (red dots). However, as the figure shows, even for a relatively small $N$, there is a significant deviation of the numerical orbit from the exact one.

> The limitations are not fully discussed in this paper; e.g., the theory only provides explanations and does not inspire a new algorithm for generative learning. This may narrow the theory's impact, as practitioners could ignore it.

Please see our general response 3 for a discussion of the limitations. Note that the purpose of this work is not to introduce a new methodology, but to provide a novel set of theoretical results that explain a counterintuitive, surprising phenomenon related to numerical errors in variational flows that we often see in practice.
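The divergence phenomenon described in this rebuttal (round-off error amplified until the computed orbit leaves the exact one, as in Figure 1(b)(d)) can be reproduced with any chaotic map. A minimal illustrative sketch (ours, not from the paper; the logistic map stands in for a flow layer, and a float64 run serves as a proxy for the exact orbit):

```python
import numpy as np

def orbit(x0, n, dtype):
    """Iterate the chaotic logistic map x -> 4x(1-x) at a given precision."""
    x = dtype(x0)
    xs = [float(x)]
    for _ in range(n):
        x = dtype(4.0) * x * (dtype(1.0) - x)
        xs.append(float(x))
    return np.array(xs)

# Same starting point, two precisions: round-off differences of ~1e-8
# are amplified exponentially along the orbit.
lo = orbit(0.3, 60, np.float32)
hi = orbit(0.3, 60, np.float64)
gap = np.abs(lo - hi)

# Early on the two orbits agree closely; by the end they disagree at O(1).
assert gap[5] < 1e-4 and gap.max() > 0.1
```

With a positive Lyapunov exponent (ln 2 per step for this map), an initial discrepancy of roughly 1e-8 reaches O(1) within a few dozen iterations, matching the qualitative gap between the blue and red dots in Figure 1.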
Summary: The paper "Embracing the chaos: analysis and diagnosis of numerical instability in variational flows" investigates the impact of numerical instability on the reliability of sampling, density evaluation, and evidence lower bound (ELBO) estimation in variational flows. The authors treat variational flows as dynamical systems and leverage shadowing theory to elucidate the abnormal behavior: common flows can exhibit a catastrophic accumulation of error, but surprisingly, results produced by flows are often accurate enough for applications despite the presence of serious numerical instability. Strengths: In terms of originality, the paper presents a new approach to understanding the behavior of variational flows in the presence of numerical instability. By treating variational flows as dynamical systems and leveraging shadowing theory, the authors provide theoretical guarantees on the error of sampling, density evaluation, and ELBO estimation. The paper also presents a diagnostic procedure to empirically validate results produced by numerically unstable flows in practice. In terms of quality, the paper is well-written. The authors provide a clear motivation for their work and present their results in a coherent manner. The paper also includes several theorems, which adds to the quality of the work. In terms of clarity, the paper is easy to follow and understand. The authors provide clear explanations of their methods and results, and the paper includes several figures and examples that help to illustrate the concepts presented. In terms of significance, the paper makes mainly two contributions to the field of variational inference. The paper provides a theoretical framework to understand the behavior of variational flows in the presence of numerical instability, which is an important problem in the field. The paper also presents a diagnostic procedure to validate results produced by numerically unstable flows in practice. 
Weaknesses: One potential weakness of the paper "Embracing the chaos: analysis and diagnosis of numerical instability in variational flows" is that it focuses primarily on theoretical analysis and does not provide as much empirical validation of the proposed diagnostic procedure as would be desirable. While the authors do provide some empirical results to support their claims, it would be beneficial to see more extensive experiments that demonstrate the effectiveness of the proposed diagnostic procedure on more complex datasets like CIFAR-10.

Another potential weakness of the paper is that the theoretical contribution, with its concepts of dynamical systems and shadowing theory, has an overlap with [1], which is the main theoretical insight of the paper. Moreover, the theorems on ELBO estimation and density estimation follow directly from the shadowing property.

Reference: 1. Paul Tupper. The relation between approximation and shadowing in molecular dynamics.

Technical Quality: 2 fair Clarity: 3 good

Questions for Authors:
1. Does the shadowing property hold for continuous normalizing flows (empirically or theoretically)?
2. In Theorem 5.1, the important constant $M$ depends on $x \in \mathcal{X}$, right? If so, then we cannot derive the shadowing property (which is a global property independent of $x$) from Theorem 5.1.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The paper could benefit from a more detailed discussion of the limitations of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our manuscript! Please see our point-by-point response below.

> “One potential weakness of the paper "Embracing the chaos: analysis and diagnosis of numerical instability in variational flows" is that it focuses primarily on theoretical analysis and does not provide as much empirical validation of the proposed diagnostic procedure as would be desirable. While the authors do provide some empirical results to support their claims, it would be beneficial to see more extensive experiments that demonstrate the effectiveness of the proposed diagnostic procedure on more complex datasets like CIFAR-10.”

Please see our general response 4.

> “Another potential weakness of the paper is that the theoretical contribution with the concepts of dynamical systems and shadowing theory has an overlap with [1], which is the main theoretical insight of the paper. Moreover, the theorems on ELBO estimation and density estimation follow directly from the shadowing property.”

Please see our general response 1.

> “Does the shadowing property hold for continuous normalizing flows (empirically or theoretically)?”

This is a good question, and is also something we are exploring now! Generally speaking, shadowing theory also applies to numerical discretizations of continuous flows such as ODEs (see, e.g., the finite shadowing theorem in [Coomes95]). And the downstream tasks of continuous NFs or neural ODEs, e.g., sampling and density evaluation, indeed rely on the numerical discretization of the underlying ODE. Hence, it should be possible to theoretically analyze the error for continuous NFs via shadowing, but we leave a deeper investigation for future work.

[Coomes95] Brian A. Coomes, Hüseyin Koçak, and Kenneth J. Palmer. Rigorous computational shadowing of orbits of ordinary differential equations. Numerische Mathematik, 1995.

> “In Theorem 5.1, the important constant M depends on $x\in \mathcal{X}$, right?
If so, then we cannot derive the shadowing property (which is a global property independent of $x$) from theorem 5.1.” This is a very good point! Indeed, Theorem 5.1 involves a value of $M$ that is valid for a single numerical trajectory. Then as long as we know something about the smoothness of $F_n$ locally around that numerical trajectory (e.g. 3rd derivative bound), we can obtain an upper bound on $M$ for that trajectory. But you are right that in order to prove that shadowing holds globally, we would need to bound $M$ for all starting states $x \in \mathcal{X}$. We can obtain such a bound if we make a global smoothness assumption on $F_n$ — e.g., a uniformly bounded 3rd derivative would suffice. We will certainly add a discussion of this in the camera ready. Thank you! On this note, a very interesting potential way to relax this is to investigate “probabilistic shadowing,” where there is a shadowing trajectory only with high probability under the initial distribution. We will add this to our discussion! --- Rebuttal Comment 1.1: Title: Correcting a minor error in our response Comment: We hope to correct a minor error in our response to your last comment. To achieve a universal bound on M for all trajectories, we need a uniform bound on the 2nd derivative of the $F_n$, not the 3rd. --- Rebuttal Comment 1.2: Title: Response from the reviewer Comment: I have no further questions. Hope the authors will address the assumptions of the theorems more carefully when updating the paper. --- Reply to Comment 1.2.1: Title: Thank you Comment: Thank you again for your insightful feedback! We will indeed elaborate on the assumptions of Theorem 5.1 in the revised paper.
Summary: The paper discusses the (non)impact of numerical stability concerns when implementing variational flows, specifically, the robustness of sampling, density evaluation or ELBO computation against the chaotic behaviour of numerical implementations of variational flows. The results are the following: while small perturbations of the input imply large perturbations of the output of variational flows (Section 3), the quality of samples, density evaluations, and ELBOs are rarely affected. The paper explains this phenomenon with "shadowing": More specifically (Section 4), under the assumption of ($\epsilon$, $\delta$)-shadowing, the errors of sampling, density evaluation, and ELBO computation behave like $\epsilon + d_\text{TV}(q_0, \xi_0)$, where $\xi_0$ is the initial condition that shadows $q_0$. Whether or not a flow admits shadowing (and how $\epsilon$ relates to $\delta$) can be quantified by estimating (and bounding) the operator norm of the inverse of a specific operator that depends on the gradient(s) of the flow (Section 5). Numerical experiments show how the shadowing windows are minimal in typical applications of variational flows, even though numerical inaccuracy in implementing the flow is significant. Strengths: Analysing the numerical implementation of variational flows through the lens of dynamical systems is exciting because it logically explains a seemingly unlogical phenomenon. On top of that, the paper is very well written: Although some components are rather technical (e.g. shadowing), the manuscript remains easy to follow. Weaknesses: In general, I like the paper, but Section 5 ("Computation of the shadowing window size") leaves some open questions, especially in comparison to the finite-time shadowing Theorem in [33]. In my view, the following points should be addressed before publication: 1. 
The algorithm for computing $\lambda = \|A^{-1}\|$ is inaccurate: The implementation explained in Appendix B pretends that one can compute $\nabla F_k$ exactly, but since evaluating $F_k$ suffers from approximation errors, $\nabla F_k$ must as well. For context, compare Appendix B to Section 3.4 (especially, Equation 45) in [33] (reference in the paper). The inaccuracy is unlikely to be crucial but exists. It can (and should) be corrected. Or have I missed something? 2. Why does the paper estimate the operator norm the way it does? In other words, why is a call to the default `eigmin` function superior to the algorithm discussed in Section 3.4 in [33]? The paper mentions that the complexity of `eigmin` is $O(d^3N^3)$, whereas the tailored algorithm on p. 187 in Section 3.4 in [33] appears to cost $O(N d^3)$. Please discuss the numerical cost of estimating the shadowing windows in the experiments (for example, by providing estimates of the run-/wall time). 3. Could the authors please elaborate on how to estimate $M$? Why is estimating $M$ and $\delta$ problem-specific, and estimating $\lambda$ not (line 238)? If estimating $M$ is problem-specific, please discuss estimating $M$ for the problems in Section 6. (I have not found this information in Section 6 and Appendix D.) 4. Sections 5 and 6 seem to focus on the computation of the shadowing window size $\epsilon$. Does this mean all flows considered in this paper automatically have the shadowing property? Is this true for all variational flows (e.g., those discussed by Papamakarios et al.: neural spline, planar, radial, Hamiltonian, ...)? It would be helpful to discuss the existence of shadowing before computing the window size in the experiments (e.g. by estimating $2M\lambda^2\delta$). As mentioned above, these points should be addressed before publication. But I expect all questions to be straightforward to answer, and I will increase my score once this is done. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - This is not a question but more of a non-technical suggestion for the author(s). There have been discussions about potential ethical problems of the "Boston Housing" dataset: https://scikit-learn.org/1.0/modules/generated/sklearn.datasets.load_boston.html. If the corresponding experiment in the paper can be rerun with a different dataset, the longevity of this work may benefit. - Out of curiosity: Why are the sampling bounds propositions but the ELBO and density bounds theorems? - Please correct the complexity statements as commented in Appendix B. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As mentioned under "weaknesses" above, one potential weakness of the algorithm derived from Theorem 5.1 could be computational feasibility. From the manuscript, it is unclear how much computational power it takes to implement this algorithm. A reader will benefit from learning about this limitation if the algorithm is prohibitively costly for some applications. Other than that, limitations seem to be discussed wherever relevant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments! Before we address them, we would like to point out that [33] does not introduce the finite shadowing theorem—[17] does. We assume that when you refer to [33] in your review, you mean [17]. Hopefully we did not misunderstand! [17]: Brian Coomes, Hüseyin Koçak, and Kenneth Palmer. Shadowing in discrete dynamical systems. In Six lectures on dynamical systems, pages 163–211. World Scientific, 1996. [33]: Ivan Dokmanić and Rémi Gribonval. Beyond moore-penrose part i: generalized inverses that minimize matrix norms. arXiv:1706.08349, 2017. > The algorithm for ... This is an excellent point! Initially, we felt detailing the errors in computing $\lambda$ was too nuanced. However, we now agree with your view. We plan to provide a more comprehensive description of computing $\lambda$, including handling the small numerical errors that occur there, in the final text. The fix is very straightforward, as you suggest – we use Eq. (45) in [17]. And it does not affect the results much: indeed if we compute terms in $\nabla F_k(x)$ with a floating-point error $\delta$, this introduces $O(\sqrt{N}\delta)$ Frobenius norm error into our representation of $A$, which then translates to an upper bound on $\lambda$ of roughly $$(1- \delta||\tilde{A}^{-1}||)^{-1}||\tilde{A}^{-1}|| = (1 - O(\sqrt{N}\delta) \tilde\lambda)^{-1} \tilde\lambda.$$ Here $\tilde{A}$ denotes the digital computed $A$ and $\tilde\lambda := ||\tilde{A}^{-1}||$. Substituting values from our experiments, this amounts to an inconsequential relative error of around $(1- 10^{-9})^{-1} \approx 1$ on our minimum eigenvalues $\lambda$. > why does the ... This is also a great question! We didn't claim `eigmin` outperforms the algorithm in Section 3.4 of [17] (originally in [Coomes95] for general dimensions, so for the rest of the response we refer to [Coomes95]). You're correct that [Coomes95] offers better scaling than a direct call to `eigmin`. 
However, the method in [Coomes95] operates under the heuristic that a dynamical system with shadowing is hyperbolic, or nearly so. Specifically, the 'hyperbolic splitting' discussed in Appendix B, p. 413 of [Coomes95] (i.e., the choice of $\ell$), and the 'hyperbolic threshold' choice in Appendix B, p. 415 of [Coomes95] (i.e., the choice of $p$), assume that the dynamics is hyperbolic or close to it, which is impractical for our variational flows (our dynamics are general time-inhomogeneous systems). Also, as discussed in Appendix B (lines 488--492), this method can potentially lead to a significant overestimation of the shadowing window when the dynamics are not (nearly) hyperbolic. Therefore, we prefer to use a procedure that is simpler and more widely applicable, at the cost of more computation. We found in our experiments that using `eigmin` took just a few seconds in most cases (see our general responses), which was acceptable. We will include the above discussion about the method proposed in [17] in the revised manuscript (likely in the appendix). [Coomes95] Brian A. Coomes, Hüseyin Koçak, and Kenneth J. Palmer. Rigorous computational shadowing of orbits of ordinary differential equations. Numerische Mathematik, 69:401–421, 1995. > Could the authors ... Estimating $M$ can indeed be problem-specific because $M$, by its definition (line 232), requires knowledge of the local Lipschitz smoothness of each flow layer evaluated at the computed orbit. This necessitates different analyses for distinct flows. A universal approach might involve knowing a uniform upper bound on the 3rd derivative of $F_n$ (see response to Reviewer sEFg). Likewise, determining $\delta$ demands analyzing the single-step error for the given flow. As for computing $\lambda$: we can use standard automatic differentiation tools to differentiate $F_n$ and compute $A$ (up to floating-point error).
At that point, we just need to compute an eigenspectrum for a symmetric blockwise tri-diagonal matrix. Both computations can be done in a generic way without model-specific derivations.

> Sections 5 and 6 ...

We appreciate your suggestion, and will include a discussion about the existence of shadowing in the revised manuscript! It is not true that all variational flows exhibit the shadowing property. Our Theorem 5.1 states a sufficient condition, specifically, $M\lambda \epsilon \leq 1$, for the shadowing property to exist. The practical verification of the shadowing property requires an estimation of the one-step digital computing error $\delta$, the local smoothness constant of the flow $M$, and $\lambda$. Our manuscript describes the computation of $\lambda$ (via an eigenvalue problem for a symmetric blockwise tri-diagonal matrix) and the estimation of $\delta$ (Figure 10). These suffice to compute the shadowing window size. Estimating $M$ can be more difficult, especially for complex flows. While one could analyze the local Lipschitz smoothness of each flow layer, handling the $\sup$ in the definition of $M$ can be difficult. One option is to bound the 3rd derivative of $F_n$ to upper bound the $\sup$ analytically. However, given the range of results already in our manuscript, we leave this analysis for model-specific applications in future work.

> This is not a question ...

Thanks for pointing this out! We were not aware of this concern. Yes, we will re-run our experiments for the camera ready with another dataset.

> Out of curiosity ...

The sampling bounds were slightly easier to prove compared to the theorems. We can switch them all to “theorem” for consistency’s sake in the camera ready.

> Please correct ...

Will do!

> As mentioned under "weaknesses" ...

Thanks for your suggestion! We will provide more details regarding computing the shadowing window, with a thorough complexity analysis, in the camera ready.
Please also see our general response 2 for a detailed response. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I am very sorry for mixing up the references. Indeed, I mean reference [17] -- I have no idea why I mentioned [33] instead. **Regarding $\lambda$:** I agree with everything you said; please include the explanations in the revision. **Regarding `eigmin`:** I concur that a simple, out-of-the-box solution like calling `eigmin` is a good starting point and may be sufficient for most applications. But I think a reader of your paper should know that a more efficient (albeit more complicated) method exists. **Regarding $M$ and sections 5 and 6:** So would you agree with my summary that although it is difficult to determine whether a flow has the shadowing property (because bounding $M$ is difficult), computing the size of the shadowing window is straightforward (because $\lambda$ and $\delta$ are readily available)? In other words, we can always compute the sizes of hypothetical shadowing windows but cannot guarantee that these windows exist. I find this very interesting. However, since -- strictly speaking -- it somewhat limits the helpfulness of Theorem 5.1 in practical applications, this phenomenon strengthens my wish to see more of a discussion of estimating (or at least bounding) $M$. If this is too much work (it may well be), then I would appreciate you at least explaining how one would approach this problem and why it is difficult. What do you think? --- Reply to Comment 1.1.1: Title: Response to further comments Comment: Thank you for your continued response! > Regarding $\lambda$: I agree with everything you said; please include the explanations in the revision. Yes! We will certainly include the explanation in camera ready. > Regarding `eigmin`: I concur that a simple, out-of-the-box solution like calling eigmin is a good starting point and may be sufficient for most applications. 
But I think a reader of your paper should know that a more efficient (albeit more complicated) method exists. We agree! We will provide an expanded discussion on the method provided in [Coomes95] (the current manuscript contains a brief discussion in Appendix B, lines 488–492) and will add a discussion of potentially more scalable algorithms that leverage the sparsity of $AA^T$ as described in the general response 2. > Regarding M and sections 5 and 6: So would you agree with my summary that although it is difficult to determine whether a flow has the shadowing property (because bounding M is difficult), computing the size of the shadowing window is straightforward (because $\lambda$ and $\delta$ are readily available)? You are totally correct. > If this is too much work (it may well be), then I would appreciate you at least explaining how one would approach this problem and why it is difficult. We appreciate your constructive comment about estimating $M$. To clarify, the challenge of demonstrating shadowing is as follows: 1. To demonstrate shadowing for *one* numerical trajectory, you need the supremum of $\nabla^2F_n$ locally around each numerical trajectory point (i.e., $M$ as stated in Theorem 5.1 of our manuscript). 2. To demonstrate shadowing for *all* numerical trajectories (i.e., our Definition 4.1), you need to bound the supremum of $\nabla^2F_n$ globally. As in our response to sFEg and to your previous comment, we can derive bounds as follows. To obtain a bound on $M$ for a particular trajectory, we just need $\nabla^2F_n(\hat{x}_n)$ for each numerical orbit point (e.g., via automatic differentiation), as well as a bound on the norm of the 3rd derivative of $F_n$. For example, if $\|\nabla^3F_n\| \leq B$ uniformly, then $$ M \leq \max_n \{ \epsilon B + \|\nabla^2F_n(\hat{x}_n)\|\}. $$ For a global bound on $M$, we need a universal bound on the 2nd derivative of $F_n$. For example, if $\|\nabla^2 F_n\|\leq B$ uniformly, then $M \leq B$. 
Note that in both cases, the uniform bound on the derivative requires an in-depth analysis tailored to the specific flow. It is also possible to estimate the value of $M$ for an individual trajectory by computationally maximizing $\|\nabla^2 F_n\|$ in the $\epsilon$-ball around the numerical orbit points. For example, one could sample points randomly within the $\epsilon$-ball, compute $\|\nabla^2 F_n\|$ for each, and estimate the supremum using those values. However, this approach is very computationally expensive, and we would not recommend it in general. Even without direct analysis of $M$, we can judge how large $M$ would have to be to violate the shadowing condition. In particular, for our experiments, for the shadowing property to be violated (i.e., $M > (\epsilon\lambda)^{-1}$), $M$ would need to be of an order of magnitude greater than $10^{8}$. This is an unlikely scenario for flows that are reasonably smooth.
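For concreteness, the overall verification described above can be sketched in a few lines (a minimal sketch, not the implementation used in the paper; it assumes, as a simplification, that $\lambda$ is recovered from the smallest eigenvalue of $AA^T$ as $\lambda = \lambda_{\min}(AA^T)^{-1/2}$, and that $M$ and $\epsilon$ are supplied by the user):

```python
import numpy as np

def shadowing_condition(A, M, eps):
    """Check the sufficient condition M * lam * eps <= 1 from Theorem 5.1.

    A is the (dense, for illustration) Jacobian matrix of the numerical orbit;
    lam is taken to be 1/sqrt(lambda_min(A A^T)) -- an assumption made for
    this sketch. Returns (condition_holds, lam).
    """
    # eigvalsh returns eigenvalues in ascending order; index 0 is the smallest
    lam = 1.0 / np.sqrt(np.linalg.eigvalsh(A @ A.T)[0])
    return M * lam * eps <= 1.0, lam
```

With `A` the identity, `lam` is exactly 1, so the condition reduces to `M * eps <= 1`.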
Rebuttal 1: Rebuttal: # General response to the reviewers We thank the reviewers for their valuable feedback. In this response, we address shared comments. Specific responses to each reviewer will follow separately. ## 1. Comparison to [23] (MECE, sFEg) Our work is not an extension or competitor to the work of [23]; the two works have different objectives. While [23] establishes a relationship between shadowing and the weak distance between inexact and exact trajectories in dynamical systems, our work provides a theoretical basis for the counterintuitive behavior of numerically-implemented variational flows observed in practice. The overlap is minimal; our Proposition 4.2 echoes the main theorem in [23] with some differences (e.g., Lévy-Prokhorov vs. bounded Lipschitz distance; our work does not place assumptions on the initial state distribution). We introduce a substantial set of new results in Theorems 4.3, 4.4, 4.6, 4.7, and Proposition 4.5 dedicated to density evaluation, ELBO estimation, and similar, strengthened results for MixFlows. These theorems are not consequences of [23] and employ distinct proof techniques. Our work offers significant contributions even for readers familiar with [23]. We'll elaborate on the relationship with [23] in the camera ready. ## 2. Computational cost of the shadowing window (c44C, e3zg) In our experiments, this was a fairly minor computational cost. For a flow length of N=500, `eigmin` took just a few seconds. Given that this was acceptable in practice, we didn’t code a specialized method. In the camera ready, we will report the computation time for estimating the shadowing window size. Even so, we suspect that $\lambda$ can be computed more efficiently by utilizing the sparsity of $AA^T$: specifically, $AA^T$ is a symmetric positive definite block-tridiagonal matrix with bandwidth $d$ (lines 476--477 in Appendix B) and so has $O(Nd^2)$ entries. 
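As an illustration of how this sparsity might be exploited (a sketch only, not the method used in our experiments; `smallest_eigenvalue` is a name we introduce here), SciPy's shift-invert Lanczos solver can target the smallest eigenvalue directly, and its internal sparse factorization stays cheap for a banded SPD matrix:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def smallest_eigenvalue(A):
    """Smallest eigenvalue of B = A A^T for a sparse banded matrix A.

    B is symmetric positive definite and block-tridiagonal, so the sparse
    LU factorization behind shift-invert remains banded and inexpensive.
    """
    B = (A @ A.T).tocsc()
    # sigma=0 targets the eigenvalue nearest zero, i.e. the smallest one
    # for a symmetric positive definite B
    return float(spla.eigsh(B, k=1, sigma=0, return_eigenvectors=False)[0])
```

For a diagonal $A = \mathrm{diag}(1,\dots,5)$, $AA^T$ has eigenvalues $1, 4, 9, 16, 25$, so the routine returns 1.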
We are currently investigating more efficient methods for calculating $\lambda$ along these lines, e.g., the inverse power method [Schatzman02, Chapter 13.3.3], or tridiagonalization via Lanczos iterations [Lanczos] followed by divide-and-conquer algorithms [Coakley13]. [Schatzman02] Michelle Schatzman. Numerical analysis: a mathematical introduction. Oxford University Press, USA, 2002. [Lanczos] C. Lanczos: An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators, J. Res. Nat. Bur. Stand. 49, 255 (1950) [Coakley13] Ed S. Coakley and Vladimir Rokhlin. A fast divide-and-conquer algorithm for computing the spectra of real symmetric tridiagonal matrices. Applied and Computational Harmonic Analysis, 2013. ## 3. Discussion of the limitations (MECE, sFEg, d5rn, e3zg) We agree there was not enough discussion of limitations in the original submission; given the extra page in the camera ready, we are happy to include a thorough discussion of the limitations of our work, based on the following key points: - Our error bounds mainly address downstream tasks post-training; we do not provide results regarding how numerical instability influences the training process itself. - While our theory is centered around error bounds that are proportional to the shadowing window size $\epsilon$, we’ve observed that $\epsilon$ can grow with $N$. This suggests further study on its theoretical growth rate; we provided an initial discussion of this in the Appendix, but more work needs to be done here. - Our Theorem 5.1 verifies shadowing for a particular trajectory, but needs a global smoothness condition on $F_n$ for global shadowing. To help address this, our current analysis may be extended from the global shadowing property to a weaker version that only holds with high probability. See our response to Reviewer sFEg. 
- Our theoretical analysis is focused on standard variational flow methods, and is not adapted to recent architectures like continuous normalizing flows or neural ODEs. - In our experiments, we employed a basic method to calculate the minimum eigenvalue of $AA^T$. Given its sparsity, a deeper exploration into more efficient techniques is merited. - Finally, we could include a broader range of variational flows (e.g., RealNVP, GLOW, and Hamiltonian variational flow) in our experiments. ## 4. Lack of numerical experiments (MECE, sFEg, e3zg) We agree that more experiments that cover other types of variational flow would be a nice addition to the paper. We plan to include results for more types of flow in the camera ready. However, we’d like to emphasize that more experimental results are not central to the contributions of this particular work. The key contributions of this work are theoretical: we provide results that use the shadowing property to explain why the error of variational flows in statistical applications remains controlled, despite the catastrophic error growth in their numerical trajectories (Figure 1(b)(d)). Our experiments are designed just to provide empirical illustration of the theory and investigate the typical shadowing window size. ## 5. Usefulness of the theory (shadowing is a strong assumption) (MECE, e3zg) We respectfully disagree that finite shadowing is an overly strong assumption. Indeed, we believe this is precisely the right assumption to explain the counterintuitive error growth phenomena in many common dynamical systems, even though this assumption of course does not hold for all systems. Further, we provide a sufficient condition to verify shadowing in practice via Theorem 5.1, which makes the strength of the assumption a priori less of an issue. 
As a side note: even beyond finite shadowing, the *infinite* shadowing property is generic for dynamical systems (informally, there is an infinite shadowing dynamical system “arbitrarily close to” any reasonable dynamical system; see submission Appendix C). This suggests finite shadowing, as a weaker property, should be similarly pervasive.
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper investigates the impact of numerical instability on the reliability of sampling, density evaluation, and evidence lower bound (ELBO) estimation in variational flows. It demonstrates through empirical examples that numerical instability can lead to deviations in the flow map affecting sampling, density, and ELBO computations. However, the paper also finds that despite numerical instability, results from flows can still be accurate enough for practical applications. The paper treats variational flows as dynamical systems and uses shadowing theory to provide theoretical guarantees on the error of downstream tasks. It also develops a diagnostic procedure to validate results produced by numerically unstable flows in practice. The paper concludes by validating its theory and diagnostic procedure on MixFlow with both synthetic and real data examples. Strengths: * The paper addresses an important and practical problem in the context of variational flows, which are widely used in generative modeling and probabilistic inference. * The empirical examples and experimental validation provide concrete evidence for the theoretical claims made in the paper. * The use of shadowing theory from dynamical systems provides a rigorous framework to understand the behavior of numerically unstable flows. * The diagnostic procedure developed in the paper could be valuable for practitioners using variational flows, allowing them to assess the reliability of their results. Weaknesses: * Due to rather strong assumptions, the paper's claims may not be applicable in all scenarios. The experiments focus only on a few datasets as well as on a specific type of flow (MixFlow). Thus, they may not fully capture the behavior of other types of variational flows or other datasets, which limits the generalizability of the findings. 
* It should be explained in more detail what novelty the current submission offers compared to the work of [23] (except for an extension to the evaluation of densities and ELBO estimations). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * It appears that the exponential scaling in the density evaluation is circumvented by looking at *log*-densities? * Can one motivate the assumptions (in particular, $\xi_0=q_0$) for the ELBO result? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Details on limitations are lacking. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our manuscript! Please see our point-by-point response as follows. > “Due to rather strong assumptions, the paper's claims may not be applicable in all scenarios.” Please see our general response 5 for a comprehensive discussion of the assumptions. > “The experiments focus only on a few datasets as well as on a specific type > of flow (MixFlow). Thus, they may not fully capture the behavior of other > types of variational flows or other datasets, which limits the > generalizability of the findings.” Thank you for the comment! Although the focus of this paper is theoretical, we are happy to include more experiments in the camera ready; see our general response 4 regarding this comment. > “It should be explained in more detail what novelty the current submission > offers compared to the work of [23] (except for an extension to the > evaluation of densities and ELBO estimations).” Please see our general response 1 for a detailed comparison to [23]. > “It appears that the exponential scaling in the density evaluation is circumvented by looking at log-densities?” This is a very good point! We did not adequately comment on this in the original submission, and will do so in the camera ready. There are two cases to consider here – absolute error and relative error. If one cares about the absolute error of the density, then our result suffices without much additional work. Densities tend to be bounded, while log-densities become unbounded in the tails. Generally, if two distributions are pointwise close in log-density, they will be pointwise close in density as well. Things become slightly trickier for unbounded densities, but we will stick with bounded here for simplicity. If one cares about the relative error of the density, then indeed our result suggests that exponential growth in the relative error is possible. 
But in this case, this is again a major improvement on what you would expect; recall that the empirical phenomenon we observe is that the *trajectories themselves* diverge exponentially. So if the trajectories diverge exponentially quickly, then the relative error could grow *doubly-exponentially*. As a rough example, consider the ratio of the densities of two normals, $\mathcal{N}(0,1)$ and $\mathcal{N}(x, 1)$. Then if we consider the map $x \to \exp(x)$ (a toy model for an exponential divergence in pushforward error), we obtain density ratios like $\exp(-\theta^2/2 + (\exp(x) - \theta)^2/2)$, which grow doubly exponentially in $x$. > “Can one motivate the assumptions (in particular, $\xi_0 = q_0$) for the ELBO result?” This is a good point, and we will clarify this for the revised manuscript. The assumption $\xi_0 = q_0$ has been made in past work [23], and was made here primarily for convenience of the analysis (lines 175–177). But it is a reasonable assumption for the following reason. We know that $\xi_0$ is close to $q_0$ due to shadowing. And $\xi_0$ is indeed an implicit function of $q_0$; it’s a fixed point of a twice differentiable function involving the whole trajectory starting at $q_0$, see page 176 of [17] (we do not recommend solving the fixed point equation in practice). The implicit function theorem yields that $\xi_0$ is a differentiable function of the numerical orbit (whose initial distribution is $q_0$). So, the relationship between $q_0$ and $\xi_0$ is not totally pathological. Assuming that $q_0 = \xi_0$ provides simplicity without compromising the insights of the result. > Details on limitations are lacking. Please see our general response 3 for a comprehensive discussion of limitations. --- Rebuttal Comment 1.1: Title: acknowledgment Comment: Thank you for the clarifications. 
After reading the general response as well as the responses to other reviews, I updated my score under the assumption that both the theoretical explanations, as well as further numerical experiments, will be included in the final version. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you again for reviewing our work! We will enhance our theoretical discussions and expand the numerical experiments in the revised manuscript.
null
null
null
null
null
null
Geometry-Informed Neural Operator for Large-Scale 3D PDEs
Accept (poster)
Summary: This paper presents the geometry-informed neural operator (GINO) for solving computational fluid dynamics (CFD) problems. It combines graph neural operators (GNO) and Fourier neural operators (FNO) to adapt to irregular discretized grids. The authors have tested the model on two large-scale datasets. Strengths: 1. The authors propose combining GNO and FNO to leverage the advantages of both methods, such as analyzing local and global information and efficiently processing irregular grids. They also conduct experiments demonstrating that GINO exhibits discretization invariance over the latent grid and the input-output mesh. 2. The authors have generated two CFD datasets using various vehicle datasets. It would greatly benefit the learning physical simulation community if these datasets were made publicly available upon the paper's acceptance. Weaknesses: 1. The definition of the $\kappa$ operator in the graph operator block is unclear. It is not specified whether it measures the distance between two points or the similarity of their features. Additionally, it would be helpful to know if the $\kappa$ operator has any learnable parameters. Providing more details in the paper would make it self-contained and enable readers to understand the methodology. 2. The paper mentions the input of SDF features and surface points. It would be beneficial to clarify if these two types of data are fed in a uniform manner. For instance, are the point locations associated with their corresponding SDF values, while the surface points are assigned a value of 0? 3. The authors claim efficient graph construction in the paper. However, it appears that once the graph operator block finishes processing the input, it results in a regular grid. It would be important to address whether the implementation takes this into consideration and ensures the efficiency of the graph construction process. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer zgWV > **Q1:** The definition of the κ operator in the graph operator block is unclear. It is not specified whether it measures the distance between two points or the similarity of their features. Additionally, it would be helpful to know if the κ operator has any learnable parameters. Providing more details in the paper would make it self-contained and enable readers to understand the methodology. **A1:** Thank you for the comments. The κ operator in GNO learns a transformation between meshes. It can be viewed as a learned interpolation, where the kernel measures the weights (similarity) between two points. Radial basis function (RBF) interpolation is one simple example of such a kernel. In GNO blocks, the kernel function κ is parameterized as a neural network. Its weights are learnable parameters. As shown in the table, the learnable GNO encoder-decoder outperforms fixed interpolation. We will add more background to make the paper self-contained. > **Q2:** The paper mentions the input of SDF features and surface points. It would be beneficial to clarify if these two types of data are fed in a uniform manner. For instance, are the point locations associated with their corresponding SDF values, while the surface points are assigned a value of 0? **A2:** Thanks for the question. The SDF features are measured on the uniform latent space (64x64x64), and the surface points are a point cloud of the geometry. The SDF is directly fed to the latent FNO model, while the surface is fed to the encoder GNO. The two inputs do not need to have the same format. We will add these clarifications to the paper. > **Q3:** The authors claim efficient graph construction in the paper. However, it appears that once the graph operator block finishes processing the input, it results in a regular grid. It would be important to address whether the implementation takes this into consideration and ensures the efficiency of the graph construction process. 
**A3:** Thanks for the suggestion. Since the ending mesh is a uniform grid, it is possible to round the coordinates of each node in the starting mesh to a grid cell and look up its neighbors. In our implementation, we used a hash-table-based graph construction, which is similar to the rounding process. --- Rebuttal Comment 1.1: Comment: Read the rebuttal, maintain the same rating.
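To illustrate the rounding idea in A3 (a simplified sketch under our own assumptions, not the actual implementation; the function name is made up): input points are hashed into cubic cells whose side equals the connection radius, so each regular-grid node only scans its 27 adjacent cells instead of all points.

```python
from collections import defaultdict
import numpy as np

def radius_neighbors_hashed(points, grid_points, radius):
    """For each regular-grid node, find input points within `radius`,
    via a hash table of cubic cells of side `radius` (3D case)."""
    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[tuple(np.floor(p / radius).astype(int))].append(i)
    neighbors = []
    for g in grid_points:
        c = np.floor(g / radius).astype(int)
        cand = []
        # a point within `radius` of g must fall in one of the 27 nearby cells
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    cand.extend(cells.get((c[0] + dx, c[1] + dy, c[2] + dz), []))
        neighbors.append([i for i in cand
                          if np.linalg.norm(points[i] - g) <= radius])
    return neighbors
```

The construction is linear in the number of points plus the number of (grid node, neighbor) pairs, matching the efficiency claim for a uniform ending grid.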
Summary: This paper introduces a novel approach for applying Fourier or other Neural operators to complex geometries by prepending a “learnable projection step” via Graph Neural Operator (GNO). Unlike previous methods that morph complex geometries into regular domains, this approach projects (learnable) sampled nodes onto nearby regular grids, providing advantages such as discretization invariance and reduced computational overhead by sub-sampling. However, the method is limited to simple geometries and fails to account for variations in geometry and subtle geometry features. **The paper also lacks sufficient datasets and comparisons with relevant methods, such as the GNN family.** While the ideas are somewhat novel, though not groundbreaking, **the paper requires further evaluation to demonstrate its strengths and limitations.** **A borderline rejection is assigned, with reconsideration if the issues are addressed during the revision period.** ## After rebuttal - The authors improved their references, presentation, and empirical evaluation; Hence I increased my score to 5 Strengths: - Discretization invariant - Improved efficiency and scalability by sub-sampling - Improved empirical performance Weaknesses: - Insufficient number of datasets and comparisons: - More datasets should have been included, such as cylinders and airfoils from [1]. - Comparisons with representative methods from the GNN family, such as MeshGraphNet[1], MSGNN-Grid[2], and BSMSGNN[3], should have been discussed or ideally conducted. - Among the mentioned papers [1] to [3], a crucial comparison would be with [2], which also utilizes a background grid as a helper but differs in the backbone as it did not use FNO. This would have provided a valuable benchmark for evaluation. - The overly ambitious illustrations: such as the claim in the abstract that "...(the method) can be applied to any geometry", which is not the case. 
A more objective approach in illustrating both the strengths and limitations would be appreciated by readers. - Lack of clarity in dataset presentation and results: - Fig.2 suggests that the shapes are overly simplistic and the pressure distributions appear uniformly smooth, indicating ease of learning. It is essential for the author to provide a more comprehensive and clear explanation, including typical examples of datasets and variations in geometry among examples. [1] Pfaff, Tobias, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W. Battaglia. "Learning Mesh-Based Simulation with Graph Networks." Link: https://openreview.net/forum?id=roNqYL0_XP [2] Lino, Mario, Chris Cantwell, Anil A Bharath, and Stathi Fotiadis. "Simulating continuum mechanics with multi-scale graph neural networks.". Link: https://arxiv.org/abs/2106.04900 [3] Cao, Yadi, Menglei Chai, Minchen Li, and Chenfanfu Jiang. "Efficient learning of mesh-based physical simulation with bi-stride multi-scale graph neural network.". Link: https://openreview.net/forum?id=2Mbo7IEtZW Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the last point in **Weaknesses** Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The method is limited to very simple geometries due to 2 facts: 1. Only relies on the point cloud. A counter-example is porous material where the point cloud can be the same but the tunnel is very different. - In other words, although this method is discretization invariant, it is also geometry-ignorant, which is not desired. 2. Sub-sampling, if the key flow feature is determined by some subtle geometries, which is very common, this method fails again. The limitation also is reflected in the dataset, as the pressure seems to look really smooth for all cases, and all geometries are very simple. 
I recommend the authors objectively illustrate these facts, even at the beginning (you do not have to emphasize them; mentioning them is enough). More experiments objectively showing the limitation (maybe in the appendix) are also appreciated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer 5VBN > **Q1:** Insufficient number of datasets and comparisons: More datasets should have been included, such as cylinders and airfoils from [1]. Comparisons with representative methods from the GNN family, such as MeshGraphNet[1], MSGNN-Grid[2], and BSMSGNN[3], should have been discussed or ideally conducted. Among the mentioned papers [1] to [3], a crucial comparison would be with [2], which also utilizes a background grid as a helper but differs in the backbone as it did not use FNO. This would have provided a valuable benchmark for evaluation. The overly ambitious illustrations: such as the claim in the abstract that "...(the method) can be applied to any geometry", which is not the case. A more objective approach in illustrating both the strengths and limitations would be appreciated by readers. **A1:** We thank the reviewer for the suggestions. As suggested by the reviewer, we added the comparison with MeshGraphNet [1]. The GNN model learns local interactions on flexible meshes. However, it has difficulty learning long-range global interactions. On the other hand, the GINO model has the advantages of both graph structure and Fourier methods. It has the flexibility of graphs as well as the efficiency of Fourier methods. The details can be found in the general response. We are further experimenting with GraphCast [4], which has a latent graph similar to MSGNN-Grid [2]; so far it underperforms the MeshGraphNet model [1]. We will continue to experiment to present a faithful comparison. We are happy to add the references to MSGNN-Grid and BSMSGNN too. In this work, we aim to study industry-level 3D aerodynamics simulation. We generate the Ahmed-body dataset, which has 7M nodes in the space and 100k nodes on the surface. The 2D datasets such as the cylinder flow and airfoils are interesting and widely studied, but they are much smaller with only a few thousand nodes. 
The 3D fluid problems are several orders of magnitude more expensive to generate. We have been searching for complex 3D simulations for a long time, but such public data is still lacking in the community. Therefore we decided to generate a new dataset. If the reviewer is aware of existing 3D simulations with multiple complex shapes, we will be very happy to experiment with them. > **Q2:** Lack of clarity in dataset presentation and results: Fig.2 suggests that the shapes are overly simplistic and the pressure distributions appear uniformly smooth, indicating ease of learning. It is essential for the author to provide a more comprehensive and clear explanation, including typical examples of datasets and variations in geometry among examples. **A2:** Thanks for the comments. We added a few illustrations in Figure 6 in the appendix. We also provide a figure ([3] in the general response) of the collection of shapes below. They are significantly more complex compared to the existing 2D problems such as the channel flow and airfoils, where the airfoils are smooth curves determined by a few parameters. It is unfair to require graphics-level meshes. Even with these relatively simple meshes, the physics is highly complicated as shown in Figure 6. High-fidelity 3D simulations on graphics-level 3D shapes would take up to billions of mesh points to solve with the NASA FUN3D solver [1], which is well beyond the scope of this work. [1] Carlson, Jan-Renee, et al. "High-Fidelity Simulations of Human-Scale Mars Lander Descent Trajectories." AIAA AVIATION 2023 Forum. 2023. --- Rebuttal Comment 1.1: Title: Increase my score to 5 after reading rebuttal Comment: Thanks the authors resolved partial of my concerns. I would hence increase my score to 5. I apologize for not being able to edit the original comments (maybe because of passing due).
Summary: The authors address the task of learning to solve large-scale PDEs based on a geometry-informed neural operator. The combination of graph neural operators (GNO) and Fourier neural operators (FNO) combines the ability to handle irregular grids and the efficiency of local operations (due to GNO) with the ability to capture global interactions (due to FNO), thereby overcoming the limitations of the individual approaches. In more detail, the surface (e.g. point cloud) is input to a geometry encoder that encodes the irregular grid information on a regular grid structure based on local kernel integration layers through GNO with graph operations. Then the result is concatenated with signed distance features. Then, a sequence of FNO layers is used (i.e. in latent space) for global kernel integration. The respective intermediate result is projected back to the domain of the input geometry. To show the potential of their approach, the authors carry out experiments on a novel dataset of their own as well as a ShapeNet car dataset, where they report quantitative results regarding training and test errors. Strengths: Technical soundness and novelty: The method seems novel and performs favorably over some potential alternatives in terms of speed and accuracy, since it allows a significant speed-up in comparison to the OpenFOAM solver and offers more accuracy than GNO, GeoFNO and U-Net. Evaluation: The authors provide quantitative and qualitative results including comparisons to baselines regarding training/test errors. Exposition: The paper is well-structured and mostly readable. Figure/tables and their captions are informative. Weaknesses: Technical soundness and novelty - The discussion of limitations provided by the authors is rather short. 
Leaving aside the benefits of physics-informed approaches, which led to extreme speed-ups over OpenFOAM particularly for fluid simulation and offer generalization to novel scenes without being limited to object categories, limits the conclusions drawn from the presented work. The relation to these should be better emphasized to clarify what the presented method adds and whether it can be combined with these. - The datasets are described in a very short manner. Especially for the new dataset, it would be relevant to see more details such as a systematic overview of the key aspects. In addition, clarifications on whether the training of the different approaches converged within the used 100 iterations would be relevant. In Figure 2, the rightmost part is also difficult to interpret given the used color scale. Evaluation: - Reporting training and test errors only gives limited insights on where the errors are better/worse in comparison to previous approaches. E.g., it is not demonstrated that physical phenomena (such as the Magnus effect or Karman vortex streets) are accurately represented. - What are limitations/failure cases that cannot be handled that well with the presented operator? References: Discussing other developments such as Brandstetter et al., CLIFFORD NEURAL LAYERS FOR PDE MODELING -> usage of multivector representations together with Clifford convolutions. The authors show the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates by their Clifford counterparts on 2D Navier-Stokes tasks or physics-informed upgrades of U-Net such as e.g. 
Wandel et al., Teaching the incompressible Navier–Stokes equations to fast neural surrogate models in three dimensions -> an example of physics-informed fluid simulation based on physics-informed U-Net or the splitting of the solver into region-wise optimization such as Balu et al., Distributed Multigrid Neural Solvers on Megavoxel Domain would improve the discussion of the presented approach and its potential in the context of related work, especially since the presented approach involves quite strong assumptions (dependence on training data/shape category), and the choice not to leverage physics-informed models should therefore be discussed. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please discuss the comments mentioned under 'Weaknesses'. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are shortly discussed, but seem to be severely limiting (category-specific approach, lacking generalization capabilities, unclear relation to speed-ups achieved based on physics-informed approaches). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer RPer > **Q1:** The discussion of limitations provided by the authors is rather short. Leaving aside the benefits of physics-informed approaches, which led to extreme speed-ups over OpenFOAM particularly for fluid simulation and offer generalization to novel scenes without being limited to object categories, limits the conclusions drawn from the presented work. The relation to these should be better emphasized to clarify what the presented method adds and whether it can be combined with these. **A1:** Thanks for pointing this out. Physics-informed approaches are a future direction that may overcome this limitation. We will separate the future work from the limitations and add a more detailed discussion of limitations as space allows. > **Q2:** The datasets are described in a very short manner. Especially for the new dataset, it would be relevant to see more details such as a systematic overview of the key aspects. **A2:** Thanks for the suggestion. We will add a more systematic description of the dataset. > **Q3:** In addition, clarifications on whether the training of the different approaches converged within the used 100 iterations would be relevant. **A3:** Most of the models such as FNO and UNet converge around 60 epochs. We are happy to include the training curves. > **Q4:** In Figure 2, the rightmost part is also difficult to interpret given the used color scale. **A4:** In Figure 2, the rightmost part represents the error. We plot the error with the same color bar as the truth and the prediction. It shows that the error is near zero almost everywhere except the bumper. We are happy to plot the error with a relative scale in the revision. > **Q5:** Reporting training and test errors only gives limited insights on where the errors are better/worse in comparison to previous approaches. 
E.g., it is not demonstrated that physical phenomena (such as the Magnus effect or Karman vortex streets) are accurately represented. What are limitations/failure cases that cannot be handled that well with the presented operator? **A5:** This is a good question. Usually, a small L2 error implies the two fields are identical, including the same physical behaviors. Since we are not predicting the velocity field but only the pressure field, we cannot study the Magnus effect or Karman vortex streets. We instead add a drag coefficient study since the drag can be computed from the pressure and the wall shear stress (we predict both). The drag coefficient is one of the major objectives in aerodynamic design. We will add the worst-case example and analysis. > **Q6:** > References: Discussing other developments such as > - Brandstetter et al., CLIFFORD NEURAL LAYERS FOR PDE MODELING -> usage of multivector representations together with Clifford convolutions. The authors show the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates by their Clifford counterparts on 2D Navier-Stokes tasks > > or physics-informed upgrades of U-Net such as e.g. > > - Wandel et al., Teaching the incompressible Navier–Stokes equations to fast neural surrogate models in three dimensions -> an example of physics-informed fluid simulation based on physics-informed U-Net > > or the splitting of the solver into region-wise optimization such as > > - Balu et al., Distributed Multigrid Neural Solvers on Megavoxel Domain > > would improve the discussion of the presented approach and its potential in the context of related work, especially since the presented approach involves quite strong assumptions (dependence on training data/shape category), and the choice not to leverage physics-informed models should therefore be discussed. **A6:** Thank you for giving these references. We will add the discussion in the related work section in the revision. 
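As a side note on the drag computation mentioned in A5 above, here is a minimal sketch of how a drag coefficient can be assembled from predicted surface pressure and wall shear stress. The function name, sign conventions, and the toy numbers are our own illustrative choices, not the paper's actual post-processing pipeline.

```python
# Hypothetical sketch: drag coefficient from surface pressure and wall shear.
# C_d = (pressure drag + friction drag) / (0.5 * rho * U_inf^2 * A_ref).
import numpy as np

def drag_coefficient(pressure, wall_shear_x, normals_x, areas, rho, u_inf, a_ref):
    """Area-weighted surface sums approximating the force integrals along x."""
    f_pressure = np.sum(pressure * normals_x * areas)  # integral of p * n_x dA
    f_friction = np.sum(wall_shear_x * areas)          # integral of tau_x dA
    return (f_pressure + f_friction) / (0.5 * rho * u_inf**2 * a_ref)

# Toy check: uniform 100 Pa on a unit-area plate facing the flow, no friction,
# rho = 1, U_inf = 10, A_ref = 1  ->  C_d = 100 / 50 = 2.
cd = drag_coefficient(
    pressure=np.full(10, 100.0),
    wall_shear_x=np.zeros(10),
    normals_x=np.ones(10),
    areas=np.full(10, 0.1),
    rho=1.0, u_inf=10.0, a_ref=1.0,
)
print(cd)  # 2.0
```

The point of the toy check is only that the coefficient is a simple functional of the two predicted surface fields, which is why a small L2 error on those fields translates into an accurate drag estimate.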
--- Rebuttal Comment 1.1: Title: Follow-up on Rebuttal Comment: I thank the authors for discussing my concerns. The following aspects remain vague: 1) The authors mention to add a more systematic description of the dataset, however, no further details are provided. 2) The authors mention to add a drag coefficient study in the revision, however, no further details are provided. 3) The benefits over the mentioned references remain unclarified. More details on this would clarify the contribution of the submission. --- Reply to Comment 1.1.1: Comment: Thank you to the reviewer for the response. It appears there has been a misunderstanding. The rebuttal does not permit the submission of revisions. Therefore, we have included our new experiments and updates in the general response and the 1-page supplemental pdf. These can be found at the **General response** prompt near the top of the webpage. We will also elaborate on them here: ## 1. The description of the dataset is in Section 3 of the pdf file. We further add the illustrations of the Ahmed-body dataset. The figure on the left shows the velocity field and the pressure field. The velocity field, represented with 7 million nodes, has complex vortexes at the rear of the body. The pressure field, represented with 100 thousand nodes, is steep at the front and also the legs. Such aerodynamic simulations are extremely expensive. Each simulation takes 7-19 hours on 2 Nvidia v100 GPUs with 16 CPU cores. It is extremely costly to generate a 3D dataset with multiple shapes. We continue to generate simulations on new shapes and increase the instances from 500 to 800. Industry-standard Ahmed-body geometries are characterized by six design parameters: length, width, height, ground clearance, slant angle, and fillet radius. Refer to the wiki (https://www.cfd-online.com/Wiki/Ahmed_body) for details on Ahmed body geometry. 
In addition to these design parameters, we include the inlet velocity to address a wide variation in Reynolds number. We identify the design points using the Latin hypercube sampling scheme for a space-filling design of experiments and generate around 800 design points. The aerodynamic simulations were performed using the GPU-accelerated OpenFOAM solver for steady-state analysis, applying the SST k-omega turbulence model. These simulations consist of 7.2 million mesh points on average, but we use the surface mesh as the input for training, which has roughly 70-100k mesh nodes. ## 2. The drag coefficient study is in Section 2 of the pdf file and the general response. To compare the performance of our model against the industry-standard OpenFOAM solver, we perform a full cost-accuracy trade-off analysis. The result shows GINO is 26,000x faster at computing the drag coefficients. Figure [1] below shows the cost-accuracy curve, measured in terms of the inference time needed for a relative error in the drag coefficient for GINO and OpenFOAM. The cost of GINO is computed as the time, averaged over the test set, needed to predict the drag coefficient by running the model. This time includes both data pre-processing (computing the SDF) as well as the model run-time and the drag calculation given the predicted fields. All models are run on a single NVIDIA V100 GPU. The cost for OpenFOAM is computed as described in the next paragraph and is averaged over the test set. The solver is run on two NVIDIA V100 GPUs in parallel. We observe a four to five order of magnitude speed-up when using GINO. At a $3\%$ relative error, we find the speed-up from our model which includes drag in the loss to be $26,000 \times$. As we increase the size of the latent space, the cost of GINO grows; however, we observe a plateau in the drag error. This is common in machine learning models as the error from using finite data starts to dominate the approximation error. 
Furthermore, we use only the size of the latent space as a hyper-parameter, keeping the number of learnable parameters fixed. It is interesting to explore further how parametrically scaling the model impacts predictive power. During data generation, we keep track of the drag coefficient predicted by OpenFOAM after every iteration. While the coefficient converges with more iterations, this convergence is not monotone and can often appear quite noisy. This makes computing the error from the raw data impossible. We therefore apply a box filter to the raw signal to compute a filtered version of the drag which acts as a smoother. We take as the reference drag the value of the filtered signal at its last iteration. To compute the number of iterations it takes for the solver to predict a drag coefficient at a given relative error, we trace back the predictions from the filtered signal and return the first time at which this prediction incurs the given error with respect to the reference drag. An example of this methodology is shown in Figure [2]. The errors for our GINO model are computed with respect to the true drag coefficient from the last iteration of the solver. This is because we take as ground truth the pressure and wall shear stress from this last iteration and train our model to predict them.
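The Latin hypercube design-of-experiments step described in Section 1 above can be sketched with SciPy's QMC module. The parameter bounds below are illustrative placeholders, not the values actually used to generate the dataset.

```python
# Hypothetical sketch of the space-filling design: 800 Latin hypercube samples
# over the six Ahmed-body parameters plus the inlet velocity (7 dimensions).
import numpy as np
from scipy.stats import qmc

# [length, width, height, ground clearance, slant angle, fillet radius, inlet velocity]
lower = np.array([0.8, 0.30, 0.25, 0.03, 5.0, 0.02, 10.0])   # placeholder bounds
upper = np.array([1.2, 0.50, 0.40, 0.08, 40.0, 0.10, 70.0])  # placeholder bounds

sampler = qmc.LatinHypercube(d=7, seed=0)
unit_samples = sampler.random(n=800)                  # samples in [0, 1]^7
design_points = qmc.scale(unit_samples, lower, upper)
print(design_points.shape)  # (800, 7)
```

Latin hypercube sampling stratifies each parameter axis into 800 equal bins with exactly one sample per bin, which is what makes 800 expensive simulations cover the 7-dimensional design space efficiently.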
Summary: This paper proposes a framework to learn a neural operator for large-scale 3D PDEs. The framework uses a well-implemented Graph Neural Operator (GNO) to transform the irregular grid into a regular grid, so that it enables the powerful Fourier Neural Operator (FNO) to work on irregular input data, such as point clouds, in a discretization-invariant way. The main contribution of the paper is to use GNO to transform large-scale irregular grids (point clouds and SDF) into regular grids, to make the FNO suitable. Based on this, GINO realizes a 100,000x speed-up compared to GPU-based simulators on large-scale steady-state CFD problems. The model is proven to work on datasets with a high level of complexity and realism. The paper also generates two large-scale CFD datasets, which require a large amount of time to simulate and generate, which is valuable. Strengths: 1. Clearly written regarding the problem setting, equations, and symbols. 2. The main idea is useful but not too complicated to understand. An efficient combination of existing methods. 3. The method has a general potential for various kinds of PDEs. 4. The method has shown great performance in engineering-level experiments, fulfilling its great potential for applications. Weaknesses: Although the experiments have demonstrated the main ability of the model, the experiments are not complete enough to support all claims and novelties. 1. Since the model is a new combination of two existing methods, GNO and FNO, the main contribution is the new usage of GNO. Then the main thing required to be demonstrated should be “GNO is more proper and has great encoding and decoding ability between irregular and regular grids”. Regarding this, some different encoding methods should be compared, such as GNN and kNN. 2. Discretization invariance is said to be important, but no direct experiments support this. Although Table 5 is relevant, the variables are not held fixed, so it is not direct support. 
In other words, a GNN+FNO baseline should be added. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. What is the difference between your proposed GNO and Continuous Convolution (CtsConv) from “Lagrangian Fluid Simulation with Continuous Convolutions”? There seems to be no obvious difference based on the equations. And one of the claimed contributions is a well-implemented GNO, which makes it much more efficient, but CtsConv is also well-implemented based on Open3D and hashing tables. 2. What is the performance for much finer latent resolutions such as 128^3 and 256^3? Can we keep decreasing the test error by increasing the latent resolution? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: 1. As the paper stated, the trained model is limited to a specific observed category of shapes. The training of the operator requires a training dataset of high quality. 2. The proposed framework should be general for various kinds of PDEs, but only steady NS equations are tested. 3. The framework is for 3D PDEs, which could be modified for time-dependent ones. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer rW9d > **Q1:** Although the experiments have demonstrated the main ability of the model, the experiments are not complete enough to support all claims and novelties. **A1:** To further support the results, we add two more experiments that compare GINO with the solver and GNNs (MeshGraphNet). When comparing against the solver, we observe a four to five order of magnitude speed-up when using GINO. At a $3\%$ relative error, we find the speed-up from our model which includes drag in the loss to be $26,000 \times$. When comparing against the GNN, we implemented the model following MeshGraphNet, which consists of an encoder, decoder, and processor. The model has a size of around 10M parameters, at a similar cost as other models. As shown in the updated experiments, the proposed model is faster than the solver while more accurate than other ML methods given a similar level of cost. > **Q2:** Since the model is a new combination of two existing methods GNO and FNO, the main contribution is the new usage of GNO. Then the main thing required to be demonstrated should be “GNO is more proper and has great encoding and decoding ability between irregular and regular grids”. Regarding this, some different encoding methods should be compared, such as GNN, kNN. **A2:** We agree that our main contribution is the combination of two existing methods, GNO and FNO, where GNO handles interaction on local irregular meshes and FNO learns global physics on the uniform latent grid. However, our goal is not to compare GNO against GNN or kNN in this paper. The main spirit of GNO is to design a graph structure with ball connectivity instead of nearest-neighbor connectivity, so the message passing is well-defined as the kernel integral on a ball. Indeed, modern graph neural networks such as GraphCast [1] also design the encoder and decoder with ball connectivity, following the definition of GNO. 
For a more comprehensive study, we do add a new experiment on GNN (MeshGraphNet) as discussed above. > **Q3:** Discretization invariance is said to be important, but no direct experiments could support this. Although Table 5 is relevant to this, the variables are not held fixed, so it is not direct support. In other words, a GNN+FNO baseline should be added. **A3:** Discretization invariance means the model can be trained on one mesh discretization and used on another. For example, we have the super-resolution experiment where we train with sub-sampled (lower resolution) meshes and evaluate the model with the full mesh. If we design the encoder or decoder with nearest-neighbor connectivity, then the neighbors (receptive field) will change along with the sampling rate, which makes it hard for GNNs to generalize across different resolutions. Regarding the question, could the reviewer clarify which variables are not kept? We kept the same hyperparameters for all sampling rates. > **Q4:** What is the difference between your proposed GNO and Continuous Convolution (CtsConv) from “Lagrangian Fluid Simulation with Continuous Convolutions”? It seems there is no obvious difference based on equations. And one of the claimed contributions is a well-implemented GNO, which makes it much more efficient, but CtsConv is also well-implemented based on Open3D and hashing tables. **A4:** Thanks for pointing out the paper. We will be happy to add a reference to it. CtsConv learns a continuous convolution for learning fluid mechanics which is based on linear interpolation. CtsConv is an efficient local convolution layer that competes with GNNs and GNOs. Indeed, our GNO implementation is similar to CtsConv. However, we consider 3D industry-level aerodynamic simulations. In the Ahmed-body dataset, the airflow has 7M particles and the surface mesh has 100k particles. As a comparison, the previous work only considers smaller-scale particle simulations. 
For the large-scale problem, it's crucial to have an efficient FNO model to capture the global physical interaction, which is our main contribution. > **Q5:** What is the performance for much finer latent resolutions such as 128^3 and 256^3? Can we keep decreasing the test error by increasing the latent resolution? **A5:** It's a very exciting direction to use much finer latent resolutions, which will likely further improve the performance as projected based on Table 3. However, 80^3 resolution already saturates 32GB Nvidia V100 GPUs. 128^3 and 256^3 resolutions will take 4x and 32x more memory respectively. It's possible to push for higher resolution with the model-parallel FNO [2], which we leave as future work. > **Q6:** As the paper stated, the trained model is limited to a specific observed category of shapes. The training for the operator requires a training dataset of high quality. The proposed framework should be general for various kinds of PDEs, but only steady NS equations are tested. The framework is for 3D PDEs, which could be modified for time-dependent ones. **A6:** Thanks for the comments. We want to emphasize that 3D aerodynamics is one of the biggest industry problems. The time-averaged NS (RANS) is still the industry standard in car design. We look forward to exploring time-dependent problems in future work. [1] Lam, Remi, et al. "GraphCast: Learning skillful medium-range global weather forecasting." arXiv preprint arXiv:2212.12794 (2022). [2] Grady, Thomas J., et al. "Model-parallel Fourier neural operators as learned surrogates for large-scale parametric PDEs." Computers & Geosciences (2023): 105402.
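The ball-connectivity argument in A2 and A3 above can be made concrete with a small toy experiment (our own illustration, not the paper's implementation): with a fixed radius, the physical receptive field is unchanged when the point cloud is sub-sampled, whereas a k-NN neighborhood stretches over a larger physical region.

```python
# Toy comparison of ball vs. k-NN connectivity under sub-sampling.
import numpy as np
from scipy.spatial import cKDTree

def neighborhood_stats(cloud, query, radius=0.1, k=16):
    """Return (#neighbors within `radius`, max distance among the k nearest)."""
    tree = cKDTree(cloud)
    ball = tree.query_ball_point(query, r=radius)
    knn_dist, _ = tree.query(query, k=k)
    return len(ball), knn_dist.max()

rng = np.random.default_rng(0)
points = rng.random((4000, 3))   # dense "mesh" in the unit cube
subsampled = points[::4]         # 4x lower sampling rate, same geometry
query = points[0]                # kept by the stride, so both queries match

ball_dense, knn_dense = neighborhood_stats(points, query)
ball_sub, knn_sub = neighborhood_stats(subsampled, query)
print(ball_dense, ball_sub)   # ball neighbor count shrinks with sub-sampling
print(knn_dense, knn_sub)     # k-NN max distance grows with sub-sampling
```

Because the ball keeps its physical radius, the kernel integral it discretizes stays the same and only its Monte Carlo resolution changes, which is the property that lets the model train on sub-sampled meshes and evaluate on the full mesh.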
Rebuttal 1: Rebuttal: # General response We are grateful to the reviewers for their insightful feedback and constructive comments. It is encouraging to note that several reviewers - agree with the scalability of the proposed model to realistic 3D aerodynamic simulations, - acknowledge the significant speed-up offered by our approach, - and recognize our new dataset as a valuable contribution. However, the primary concerns lie in the comprehensiveness of the experiments. In response to these concerns, and to support our results, we add **two new experiments** that further compare our model with GNN methods and the numerical solver. - We added an experiment comparing GINO's performance with a GNN. It shows GINO has a much smaller error rate compared to the GNN (8.31% vs 13.88%). - We perform a full cost-accuracy trade-off analysis. The result shows GINO is 26,000x faster at computing the drag coefficients. - And we further expand the Ahmed-body dataset from 500 instances to 800, making it the first and largest 3D RANS design dataset available for the community. We sincerely hope these experiments make our study more comprehensive and clarify the concerns of the reviewers. ## 1. Comparison against GNNs We added an experiment comparing GINO's performance with a GNN. It shows GINO has a much smaller error rate compared to the GNN (8.31% vs 13.88%). We used the MeshGraphNet [1], as suggested by Reviewer 4. This model is a common GNN method for physical simulations. The MeshGraphNet has three main parts: an encoder, a decoder, and a processor. Both the encoder and decoder use the same setup, with a channel size of 256. The processor has 15 layers of message passing, using edge and node blocks, also with a channel size of 256. In total, the model has about 10 million parameters. The total number of edges is around 280k, which saturates a single NVIDIA V100 GPU with 32GB of memory. 
We set the learning rate to 1e-4 with an exponential decay at a rate of 0.99985, similar to the original setup in [1]. The validation error of the GNN for the pressure field is **13.88%**. The training curve is shown in the report. The error of the GNN model is much higher than the **8.31%** of the GINO model (64x64x64 resolution, 100M params). The smaller GINO (32x32x32 resolution, 10M params) can achieve a 10.10% error rate. The GNN model is good at learning the local interaction with flexible meshes. However, it cannot easily learn the long-range global interactions. On the other hand, the GINO model has the advantages of both the graph and Fourier methods. It has the flexibility of graphs as well as the efficiency of Fourier methods for capturing long-range dependencies. [1] Pfaff, Tobias, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W. Battaglia. "Learning Mesh-Based Simulation with Graph Networks." Link: https://openreview.net/forum?id=roNqYL0_XP ## 2. Comparison against solver on drag coefficients To compare the performance of our model against the industry-standard OpenFOAM solver, we perform a full cost-accuracy trade-off analysis. The result shows GINO is **26,000x** faster at computing the drag coefficients. Figure (2, left) shows the cost-accuracy curve, measured in terms of the inference time needed for a relative error in the drag coefficient for GINO and OpenFOAM. The cost of GINO is computed as the time, averaged over the test set, needed to predict the drag coefficient by running the model. This time includes both data pre-processing (computing the SDF) as well as the model run-time and the drag calculation given the predicted fields. All models are run on a single NVIDIA V100 GPU. The cost for OpenFOAM is computed as described in the following paragraph and is averaged over the test set. The solver is run on two NVIDIA V100 GPUs in parallel. We observe a four to five order of magnitude speed-up when using GINO. 
At a **3%** relative error, we find the speed-up from our model which includes drag in the loss to be 26,000 times. As we increase the size of the latent space, the cost of GINO grows; however, we observe a plateau in the drag error. This is common in machine learning models as the error from using finite data starts to dominate the approximation error. Furthermore, we use only the size of the latent space as a hyper-parameter, keeping the number of learnable parameters fixed. It is interesting to explore further how parametrically scaling the model impacts predictive power. During data generation, we keep track of the drag coefficient predicted by OpenFOAM after every iteration. While the coefficient converges with more iterations, this convergence is not monotone and can often appear quite noisy. This makes computing the error from the raw data impossible. We therefore apply a box filter to the raw signal to compute a filtered version of the drag which acts as a smoother. We take as the reference drag the value of the filtered signal at its last iteration. To compute the number of iterations it takes for the solver to predict a drag coefficient at a given relative error, we trace back the predictions from the filtered signal and return the first time at which this prediction incurs the given error with respect to the reference drag. An example of this methodology is shown in Figure (2, right). The errors for our GINO model are computed with respect to the true drag coefficient from the last iteration of the solver. This is because we take as ground truth the pressure and wall shear stress from this last iteration and train our model to predict them. Pdf: /pdf/98e5c29d0ae6bac6a47accbf7e05f73692b0c7c1.pdf
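The filtering-and-trace-back methodology described above can be sketched in a few lines of NumPy. The function names, filter width, and the synthetic drag history are our own illustrative choices; the actual post-processing may differ in its details.

```python
# Hypothetical sketch: smooth a noisy per-iteration drag history with a box
# filter, take the last filtered value as the reference drag, and trace back
# to the first iteration from which the filtered drag stays within a given
# relative error of that reference.
import numpy as np

def box_filter(signal, width=51):
    """Moving average; edge padding avoids zero-padding bias at the ends."""
    pad = width // 2
    padded = np.pad(signal, pad, mode="edge")
    kernel = np.ones(width) / width
    return np.convolve(padded, kernel, mode="valid")

def iterations_to_error(raw_drag, rel_err, width=51):
    filtered = box_filter(raw_drag, width)
    reference = filtered[-1]                  # reference drag: last iteration
    err = np.abs(filtered - reference) / np.abs(reference)
    exceeding = np.nonzero(err > rel_err)[0]  # trace back from the end
    return int(exceeding[-1] + 1) if exceeding.size else 0

# Toy drag history: exponential convergence plus noise.
rng = np.random.default_rng(0)
iters = np.arange(2000)
raw = 0.30 + 0.20 * np.exp(-iters / 300.0) + 0.005 * rng.standard_normal(2000)
n_iter = iterations_to_error(raw, rel_err=0.03)
print(n_iter)  # iterations the toy "solver" needs for a 3% drag error
```

By construction, every filtered value from the returned iteration onward stays within the requested relative error of the reference drag, which is exactly the quantity plotted on the cost axis of the cost-accuracy curve.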
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a geometry-informed neural operator for arbitrary geometry, to facilitate learning the solution operator for large-scale 3D CFD simulation. Strengths: The proposed GINO model applies graph-kernel blocks for the encoder and decoder, for processing features in the latent uniform space, with the Fourier blocks running on the latent space to capture the global interaction. The proposed method provides significant runtime speedup. Weaknesses: For the experiment, the authors only perform the evaluation on the car category of the ShapeNet dataset. It would be more persuasive to have the proposed model evaluated on other categories rather than only a single category. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: As the authors claim a significant speedup provided by the proposed method, could the authors report the scale of the parameters of the proposed method and a comparison to the other related methods? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: As pointed out by the authors, this work is constrained to a specific category and limited to CFD with more complex shapes. It would be of practical significance if the proposed work could tackle these limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer zqvK > **Q1:** For the experiment, the authors only perform the evaluation on the car category of the ShapeNet dataset. It would be more persuasive to have the proposed model evaluated on other categories rather than only a single category. **A1:** Thanks to the reviewer for the comment. We want to first point out that we considered two datasets: one is the existing ShapeNet Car data; the other is the Ahmed-body dataset we generated. 3D aerodynamic simulations are extremely expensive. For the example of the Ahmed-body dataset, each simulation takes 7-19 hours on 2 Nvidia V100 GPUs with 16 CPU cores. It is extremely costly to generate a 3D dataset with multiple shapes. We have been searching through the literature for a very long time but could not find another dataset resembling industry-standard settings. If the reviewer would point out an existing dataset, we will be happy to try it. > **Q2:** As the authors claim a significant speedup provided by the proposed method, could the authors report the scale of the parameters of the proposed method, and the comparison to the other related methods? **A2:** Thanks for the questions. For the larger Ahmed-body dataset, GINO uses O(100M) parameters, which takes around 5 minutes per epoch to train. We match the FNO and UNet baselines to have a similar model size and training time. For the smaller Car-CFD dataset, we compared it against two other baselines: GeoFNO and GNO. The GeoFNO baseline is a 2D surface model. It's smaller and faster (10M parameters and 1 min training time), while its error is higher than the above 3D models. The GNO also has a smaller number of parameters (10M), but it has more edges, which saturates the memory on an NVIDIA V100 GPU. To further support the results, we add two more experiments that compare GINO with the solver and GNNs (MeshGraphNet). When comparing against the solver, we observe a four to five order of magnitude speed-up when using GINO. 
At a $3\%$ relative error, we find the speed-up from our model which includes drag in the loss to be $26,000 \times$. When comparing against the GNN, we implemented the model following MeshGraphNet, which consists of an encoder, decoder, and processor. The encoder and decoder have MLPs of channel size 256. The processor consists of 15 layers of message passing with edge blocks and node blocks. The model has a size of around 10M parameters. The details can be found in the general response. Note that the memory footprint of this model is the same as our 100M-parameter model, as they both saturate the memory of a single NVIDIA V100 GPU. In conclusion, the proposed model is faster than the solver while more accurate than other ML methods given a similar level of cost.
null
null
null
null
null
null
Nearest Neighbour with Bandit Feedback
Accept (poster)
Summary: This paper considers contextual bandits in a nearest-neighbor paradigm. In this paradigm, the contexts exist in a metric space, such that contexts that are close in the metric space are also likely to admit the same "correct" action. In other words, the decision boundary of the optimal mapping from context to action is assumed to be small. Intuitively, in such a setting, a nearest-neighbor type strategy is reasonable, in which one decides on an action for the current context based on past history on nearby contexts. First, the paper considers contextual bandits in the adversarial setting, with the added property that at any time step t, one of the contexts for a previous time step is "flagged". Intuitively, this flag corresponds to it being similar to the current context, providing some advice to the algorithm. The paper presents an algorithm for this model, and analyzes its regret in terms of the similarity between the current context and the flagged past context (as measured in terms of the optimal policy's assignment for these contexts). Next, the paper hones in on the case in which the flagged context is a (c-approximate) nearest neighbor of the current context, and simplifies the regret bound to be a function of the parameter c as well as the distance of the input from the decision boundary of the optimal policy. The algorithm presented in the paper seems to be a very intricate construction of search trees over the given action space, to support the given regret bounds as well as efficient calculation. Comments: Section 4.2: I found the definition of the ternary tree to be technical and hard to understand. Perhaps give an intuitive explanation of the construction before delving into notation and formal definitions. Strengths: The assumption that contexts exhibit closeness-based similarity is natural, and allows for meaningful bounds even when the space of possible contexts is infinite. The algorithm itself is interesting and very nontrivial. 
The paper is generally well-written. Weaknesses: The paper is somewhat pseudocode and notation heavy. I would personally rather have more intuition about the various components and their role in the algorithm, even at the cost of not having a full description of every procedure in the main body of the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: none Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review - we have the following comments and responses (to the phrases in quotation marks). "In other words, the decision boundary of the optimal mapping from context to action is assumed to be small" - We note that our comparator policy $y$ can be anything - it does not need to be the exact optimal mapping (which could have a large decision boundary). Choosing a smoother comparator policy (lower $\Phi(y)$) gives a lower regret but choosing a more complex comparator policy (high $\Phi(y)$) may fit the data better (and so have a lower inherent loss). "Section 4.2: I found the definition of the ternary tree to be technical and hard to understand." - We understand it is very complex and we commit to writing an intuitive explanation. "The paper is somewhat pseudocode and notation heavy." - We wanted the main body to be complete in that all novel pseudocode is given. We do have an extra page if accepted so we can provide an “overview” subsection at the beginning of Section 4 where we can describe how the different components will fit together. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
Summary: The paper under review investigates the novel application of nearest neighbor search in the context of contextual bandit problems, which I find both new and intriguing. The main contribution lies in the derived result that bounds the regret using $\Phi(y)$, a metric that quantifies the likelihood of disparate optimal choices among closely related contexts. I appreciate the generality and applicability of this metric, as it encapsulates contextual dependencies and has the potential for broad practical use. I am not familiar with the techniques used in the analysis part, but they look good to me. Strengths: This paper studies the application of nearest neighbor search in contextual bandits, which is new and practical. And it obtains some interesting theoretical results. The paper is written very clearly and has a good structure. Weaknesses: Some notations in the second part of section 2 (those related to trees) are not very intuitive and are hard to interpret, but perhaps it is not easy to design symbols for them. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Just to make sure that there is no hidden assumption on the relationship between context and loss vector? (compared with linear bandits) Is it the case that if each context has completely different loss vectors, then $\Phi(y)$ will be very large and the bound becomes vacuous? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review - we have the following comments and responses (to the phrases in quotation marks). "a metric that quantifies the likelihood of disparate optimal choices among closely related contexts." - $y$ can be anything. i.e. $y(x)$ needs not be the optimal choice for $x$ (see below). Typically though we are only interested in policies $y$ where $y(x)$ is often a good choice for $x$. "Some notations in the second part of section 2 (those related to trees) are not very intuitive and hard to interpret…" - we are sorry for the confusing symbols - we have tried to make them fit the definition where possible. "Just to make sure that there is no hidden assumption on the relationship between context and loss vector?" - There is no assumption whatsoever on the relationship between context and loss vector. There is also no restriction whatsoever on our comparator policy $y$ (i.e. we can choose $y$ to be anything - $y(x)$ does not need to be the best action for $x$). Choosing a smoother comparator policy (lower $\Phi(y)$) gives a lower regret but choosing a more complex comparator policy (high $\Phi(y)$) may fit the data better (and so have a lower inherent loss). If the contexts all have completely different loss vectors then yes - the bound will be vacuous (but no algorithm can achieve a non-vacuous bound in this case). --- Rebuttal Comment 1.1: Comment: Thank you for your response. The paper looks good to me. --- Rebuttal 2: Title: Please acknowledge rebuttal Comment: Dear reviewer, The authors have posted a rebuttal. Please acknowledge that you have read it and indicate whether they have adequately addressed your concerns/comments. The author-reviewer discussion phase ends on Aug 21 so please engage with the authors before that if you need any more clarifications. Thanks, AC
Summary: This paper studies the contextual multi-armed bandit problem in the adversarial setting. The authors propose an algorithm, CanProp, which utilizes an adaptive approximate nearest neighbor data structure to select the arm to pull for a given context. Strengths: The authors provide a novel algorithm for the contextual multi-armed bandit problem in the adversarial setting. The algorithm appears to be theoretically sound, and the authors provide regret guarantees, which they specialize to the stochastic case. Weaknesses: 1. Practicality and implementability: The algorithm relies heavily on data structures which do not seem practically feasible. The authors do not provide any empirical results to support their claims that this can be faster than existing methods 2. Notation: the authors use significant notation, some of which may be required, but some of which is unnecessarily difficult to follow. For example, line 75, using $x$ to indicate a random element instead of $X$. This is confusing, as in the condition stated in line 128, the requirement is that the function of $x$, $P[y(X)=a|X=x]$, is Lipschitz in $x$. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: How central is computational efficiency to the novelty claims of this work? If exact nearest neighbors are trivially computed at each time step (resulting in $O(t)$ complexity in round $t$) is the regret guarantee trivial? That is to say, is the result in Appendix B already known and the benefit of this paper is in a computational speed up, or was the regret guarantee itself not previously known? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review - we have the following comments and responses (to the phrases in quotation marks). " The algorithm relies heavily on data-structures that do not seem practically feasible" - Could you please elaborate on the phrase "do not appear to be practically feasible"? While these algorithms are complex, they are indeed implementable and exhibit remarkable speed. It's worth mentioning that we have detailed a slower but novel algorithm in Appendix B (referenced in Line 197) that achieves essentially the same level of regret and is straightforward to implement. "The authors do not provide any empirical results to support their claims that this can be faster than existing methods" - The alternative algorithm we present in Appendix B operates with a running time of $\Theta(NK)$ in worst-case scenarios, highlighting the exponential speedup of our main algorithm. Given the mathematical underpinning of this fact, we believe that there exists no imperative necessity to provide empirical validation for this assertion. "Notation: the authors use significant notation..." - We are sorry if the notation is difficult to follow and will attempt to make things clearer. $P[y(x)=a|x]$ is meant to be read as “the probability that $y(x)$ is equal to $a$, given $x$”. "How central is the computational efficiency to the novelty claims of this work?" - As far as we are aware our regret bound itself is completely novel, irrespective of the computational efficiency. Our “initial idea” in Appendix B is indeed novel. We commit to clarifying this. The regret guarantee for exact nearest neighbour is not at all trivial. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for your point by point response. In light of the novelty of the regret guarantee (independent of computational efficiency) I am increasing my score 5->6, despite the lack of practical validation. 1. 
Navigating nets, to my understanding, are primarily objects of theoretical interest; large constants prohibit them from yielding practical speedups. Could the authors provide references regarding the practicality of these methods? 2. Implementability: the claims of computational efficiency are significantly highlighted in this work ("extreme efficiency" mentioned 3 times in the first page). However, theoretical claims often hide prohibitively large constants and lower order terms, when Big O notation is used. Thus, numerical simulations are a great way to corroborate theoretical claims. A similar point is true of regret; even seeing toy simulations with the slower algorithm (or with an oracle that provides the exact nearest neighbor) would be good to validate that the hidden constants are not too large. Obviously, NeurIPS makes no requirements that simulations be run. However in the absence of a careful analysis exposing constants and lower order terms, numerical simulations are the only way to demonstrate the practical efficiency of a proposed method. Thus, it seems misleading to refer to these untested methods as "extremely efficient". 3. The novelty of the regret bound on its own is interesting! Some additional discussion highlighting this would be helpful. --- Reply to Comment 1.1.1: Comment: Thank you for increasing your score. Here are the responses to your questions: 1. Yes - for navigating nets the time is exponential in the doubling dimension of the metric space so it is only efficient for relatively low dimensional spaces. There is a large volume of work on the efficient computation of approximate nearest neighbours - especially in Euclidean space. The reason that we chose to focus on navigating nets is that it applies to any metric space and we are sure that it updates in logarithmic time. We note that cover trees also have these properties and it appears that they are more practical. 
We will have a deeper look into some of the faster algorithms in Euclidean space to find out which have logarithmic-time updates and include such references. We will also try to find references on the practical use of navigating nets and cover trees. 2. For the computational complexity, the $\mathcal{O}$ hides only a constant factor and we do not think it will be that large - the algorithm will be a massive speedup (over our slower algorithm) for realistic values of $K$ and $T$. Concerning the regret, the $\tilde{O}$ hides a factor of only $\mathcal{O}(\sqrt{\ln(K)\ln(T)})$. The constant under the $\mathcal{O}$ will be very small. We choose to state our bound using $\tilde{O}$ notation to improve readability rather than to hide a large term. 3. Yes - we can add such discussion.
Summary: This work studies adversarial contextual bandits. The approach to solving this problem considered in this work is to use the nearest neighbor (NN)search sub-routine algorithm, and the regret bound depends on a term that characterizes the efficiency of the NN oracle. The main advantage of using an NN-based algorithm is that the per-trial computation time can be improved exponentially, compared with previous EXP-4 based algorithms. Strengths: The proof is technically sound as far as I know. The improvement of the algorithm running time is significant, if the exponential improvement is provided firstly by this work. The presentation is clear. Weaknesses: Some typos. For example: - line 212, $a_t$ equal to $z_{t, \log K}$-> $v_{t, \log K}$ No experiments. The authors could provide some simulation results to suggest their CBNN approach indeed works in practice. I am particularly interested in the comparison between CBNN and EXP-4, from both the computation time comparison and regret comparison. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Is it possible to provide some hardness results to show that the per-trial computational time (polylog) is actually optimal? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This work is a theoretical work. It does need to address the societal impact issue. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review - we have the following comments and responses (to the phrases in quotation marks). "the per-trial computation time can be improved exponentially, compared with previous EXP-4 based algorithms." - By Exp4 do you mean our “initial idea” of combining Exp4 and Belief propagation on a dynamic tree (mentioned in Line 197)? We would like to stress that this algorithm is (as far as we are aware) novel so can be viewed as part of our contribution. "if the exponential improvement is provided firstly by this work." - The slower algorithm is also our invention so the exponential improvement is also novel. "No experiments." - The running time of our Exp4 based algorithm is (in the worst case) strictly $\Theta(TK)$ per trial so CBNN is certainly an exponential improvement (we note that the Exp4 based algorithm is in itself part of our contribution). You are right, however, that the two algorithms are not exactly equivalent so there will, empirically, be a slight difference in the regret. If time allows we will endeavour to do some experiments. "Is it possible to provide some hardness results…" - We will think about this question "It does need to address the societal impact issue." - We will address this, but don't foresee any negative societal impact. --- Rebuttal 2: Title: Please acknowledge rebuttal Comment: Dear reviewer, The authors have posted a rebuttal. Please acknowledge that you have read it and indicate whether they have adequately addressed your concerns/comments. The author-reviewer discussion phase ends on Aug 21 so please engage with the authors before that if you need any more clarifications. Thanks, AC
Rebuttal 1: Rebuttal: We thank the reviewers for their time spent in reviewing our paper. We would like to note that for our example problem (that of Theorem 3.4) we can reduce the asymptotic dependence on $T$ and $K$ to $\tilde{O}(T^{d/(d+1)}K^{1/(d+1)})$ by first quantising the contexts (a.k.a. binning) as a pre-processing step. Although it reduces the asymptotic dependence on $T$ and $K$, utilising this pre-processing step can lead to a regret much worse than without using it (due to the other terms in the regret) - hence, we would like to keep Theorem 3.4 whilst adding this new result (if we have the reviewers' permission). We will now sketch the proof: Choose some even natural number $q$ which will be tuned later. Let $D$ be the set of all vectors in $[0,1]^d$ such that each component is equal to $z/q$ for some natural number $z$. For each context $x_t$ first map it to the nearest point in $D$. For simplicity here we assume that the decision boundary is the set of vectors in $[0,1]^d$ in which all components except for the first are $1/2$ and the distribution $\mu$ is the uniform distribution (but this proof can be easily extended to capture any possibility). Now, given our policy $y$, define a new policy $y'$ on $D$ such that: (1) if $x$ is not on the decision boundary then $y'(x)=y(x)$ (2) if $x$ is on the decision boundary then $y'(x)=1$. Note that the expected loss of policy $y'$ is no more than that of $y$ plus $O(T/q)$. By applying Theorem 3.3 to the metric space $D$ with policy $y'$ we see that the expected regret of the algorithm w.r.t. policy $y'$ is $\tilde{O}(\sqrt{q^{d-1}KT})$. Hence, the expected regret of the algorithm w.r.t. policy $y$ is $\tilde{O}(\sqrt{q^{d-1}KT}+T/q)$. Setting $q=(T/K)^{1/(d+1)}$ gives us the result. Note that by treating each bin independently we would get an expected regret of $\tilde{O}(\sqrt{q^{d}KT}+T/q)$, meaning that our nearest neighbour methodology saves us a whole dimension.
We note that, whilst utilising binning as a pre-processing step can sometimes help, it destroys our more general bounds.
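As a sanity check on the tuning step in the sketch above, the two terms of the bound can be evaluated numerically. The script below is illustrative only (not part of the paper); it confirms that $q=(T/K)^{1/(d+1)}$ balances $\sqrt{q^{d-1}KT}$ and $T/q$ at $T^{d/(d+1)}K^{1/(d+1)}$.

```python
import math

def tuned_regret_terms(T, K, d):
    """Evaluate both terms of the binning bound at the tuned q = (T/K)^(1/(d+1))."""
    q = (T / K) ** (1.0 / (d + 1))
    bandit_term = math.sqrt(q ** (d - 1) * K * T)        # regret on the quantised contexts
    binning_term = T / q                                 # approximation cost of binning
    balanced = T ** (d / (d + 1)) * K ** (1.0 / (d + 1)) # claimed balanced value
    return bandit_term, binning_term, balanced

bandit, binning, target = tuned_regret_terms(T=1e6, K=10, d=3)
print(math.isclose(bandit, target) and math.isclose(binning, target))  # True
```

Both terms coincide (up to floating-point error) for any positive $T$, $K$ and $d$, which is exactly why this choice of $q$ is the minimiser up to constants.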
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper considers non-stochastic contextual bandit problems in which the regret is defined with respect to an arbitrary decision policy that maps contexts to arms. For this problem, a framework of algorithms based on the nearest neighbor rule is developed. The paper provides generic regret bounds for this framework and shows regret bounds for semi-stochastic settings in which contexts consist of $d$-dimensional vectors. Strengths: - A general framework applicable to a wide range of contextual bandits is proposed. - The proposed algorithm is superior in terms of computational efficiency due to the use of sophisticated data structures. - The algorithmic procedures and theoretical results are clearly explained. Weaknesses: - Comparisons with existing studies (both in terms of approach and results) are limited. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Are there any regret lower bounds that can be compared to the results of this paper? - This paper uses a general result (Theorem 3.2) to construct a bound for one specific example (Theorem 3.4). Theorem 3.2 appears to be so general that I expect to be able to derive nontrivial results for other specific examples as well (e.g., when the context is discrete). Can you think of any such examples? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have no concerns about the limitations and potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review - we have the following comments and responses (to the phrases in quotation marks). "Comparisons with existing studies are limited." - As far as we are aware our work is the first to achieve any of the results given in our paper. In the case of Theorem 3.4 (and the new modification given in the general response) there are (worse or incomparable) results in the literature (see [21] and the citations therein), often limited to the fully-stochastic special case, that we can compare against and we commit to doing so. In the case of the more general theorems we know of no other non-trivial results to compare against. As far as we know this is the first time a nearest-neighbour approach has been applied to adversarial bandits and the first time a 1NN approach has been applied to any bandit problem. We commit to expanding our related work section. "Are there any regret lower bounds that can be compared to the results of this paper?" - Yes - we can show that (when the parameter $\rho$ is tuned correctly) our bound is almost optimal. To prove this first note that the non-contextual bandit problem with $S$ trials and $K$ arms has a regret lower bound of $\Theta(\sqrt{KS})$. Now consider any $\Phi$ and let $T=S\Phi$ for some arbitrary $S\in\mathbb{N}$. Let our sequence of contexts $\{x_1, x_2, … x_T\}$ be such that $x_t$ is the nearest neighbour (seen so far) of $x_{t+1}$. Now divide the sequence into $\Phi$ contiguous segments, all of length $S$, and in any particular segment let each context in that segment have the same associated action. With this knowledge we now have $\Phi$ independent problems each with $S$ trials. Each problem has a regret lower bound of order $\sqrt{KS}=\sqrt{KT/\Phi}$. The total regret must hence be lower bounded by $\Theta(\sqrt{KT\Phi})$ which is a logarithmic factor different from our upper bound. 
"This paper uses a general result (Theorem 3.2) to construct a bound for one specific example" - Theorem 3.4 is just an example. Our main results are theorems 3.2 and 3.3. These bounds can be applied to any metric space (Theorem 3.3) or anything (Theorem 3.2). Note also that with Theorem 3.3 we don’t need to know the metric space itself - we need only be able to compute the distance between any pair of contexts. This allows our results to be applied to many more applications than Theorem 3.4 allows. For example, the contexts could be machines connected to the internet and the distance could be how many links between two machines - or the contexts could be complex user profiles and the distance given by some algorithm which computes how similar two user profiles are. Also, our main results allow for any sequence of contexts whilst in Theorem 3.4 they must be drawn i.i.d. at random (which is unrealistic in many applications). --- Rebuttal Comment 1.1: Comment: Thank you very much for your kind response. I am satisfied with the responses and have no further questions.
null
null
null
null
null
null
Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP
Accept (poster)
Summary: The paper tackles the open-vocabulary panoptic segmentation with a frozen CLIP. Unlike previous two-stages works, the paper uses the same backbone from the frozen CLIP for the mask generator and classifier. The single-stage framework accelerates the training and inferring speed and achieves SOTA performance. Strengths: The single-stage framework proposed in the paper is concise and time-friendly for training and inference. The writing of the paper is clear and can be easily followed. The paper shows the convolutional CLIP has a better generalization ability to higher-resolution inputs than ViTs. FC-CLIP achieves SOTA performance on multiple open-vocabulary panoptic segmentation benchmarks. Weaknesses: 1. The novelty of the paper is weak. The idea of using the frozen CLIP for open-vocabulary prediction is the same as F-vlm. The main difference is that paper uses the framework of F-vlm to tackle the panoptic segmentation problem and substitute the Mask R-CNN heads in F-vlm with kMaX-DeepLab. Regarding innovation, I am not saying that the author must come up with some fancy innovations. But compared with the predecessors' work, what targeted improvements and discoveries have been made to new problems? Compared to F-vlm, I did not see any special design for the panoptic segmentation problem. 2. The claim that (frozen) CNN-based CLIP is better than ViT-based CLIP for dense prediction is not well verified. In the paper, the author verified the claim in Figure 1 with k-means visualization of these two types of CLIP. But whether the smoother feature correlates with the dense prediction performance is unclear. Many works [2][3] use ViT-based CLIP as the backbone of dense prediction tasks. Although the author compares ViT-based CLIP and CNN-based CLIP in Table 5 of Supp, the mask proposals are optimized with the CNN feature (on seen classes). It is more rigorous to use the ground truth mask to evaluate the classification performance of these two types of CLIP. 
The experiment of training a mask proposal network on top of ViT-based CLIP is also needed to evaluate the mask proposal performance of ViT-based CLIP. The results of the 1281 resolution (used in the main paper) are missing in Table 5 of Supp. 3. Eq. 7 might have some typos. 'i' is the index of the mask, and 'i' should be substituted with the index of the class. Eq. 7 might be copied from F-VLM, because in F-VLM 'i' denotes the index of the class, which is correct. The author should check the formula more carefully. [1] Weicheng Kuo, Yin Cui, Xiuye Gu, AJ Piergiovanni, and Anelia Angelova. F-VLM: Open-vocabulary object detection upon frozen vision and language models. ICLR, 2023. [2] Chao Ma et al. "Open-vocabulary Semantic Segmentation with Frozen Vision-Language Models." BMVC, 2022. [3] Ziqi Zhou et al. "ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation." arXiv:2212.03588, 2022. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In OpenCLIP, there are three configurations of ConvNeXt-Large: ConvNeXt-Large (D) @ 256x256 w/ augreg on LAION-2B, with a top-1 of 75.9%; ConvNeXt-Large (D) @ 320x320, a fine-tune of the 256x256 weights above for ~2.5B more samples on LAION-2B, top-1 of 76.6%; ConvNeXt-Large (D) @ 320x320, a soup of 3 fine-tunes of the 256x256 weights above on LAION-2B, top-1 of 76.9%. Which one did the author use? The ConvNeXt-L trained with 320x320 is naturally more suitable for high-resolution inputs than ViT-L/14 trained with 224x224. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: In limitations, the author mentioned "how to deal with conflict or overlapping vocabularies (e.g. cat vs. cat head)".
Does this problem occur in the experiment of the paper? What will happen to FC-CLIP if it faces the problem? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback and address the concerns below. > ***W1: Novelty, comparison to F-VLM*** Please refer to **C1: Relationship to F-VLM** and **C2: Novelty/Contribution** > ***W2: ViT-based or CNN-based CLIP for dense prediction*** We thank the reviewer for the questions, and address the concerns carefully below. **Many works [2][3] use ViT-based CLIP**: We thank the reviewer for the extra references, which we will add in the revision. We note that those papers [2][3] adopting ViT-based CLIP work *specifically* for semantic segmentation only, which often does not require a very high input resolution as it does not need to predict object-level masks with different scales (the COCO dataset in particular contains many small objects). To be concrete, a typical ADE20K semantic segmentation setting is to use $512 \times 512$ input resolution, while for COCO panoptic segmentation, it is usually $800 \times 1333$ or $1281 \times 1281$. The ViT-L/14 (with output stride 14) is sufficient under the relatively lower resolution $512 \times 512$. Additionally, we note that the state-of-the-art open-vocabulary semantic segmentation method SAN [24] also observes that ViT performs undesirably under higher resolution, and thus they have to adopt a two-stage framework to feed different-resolution images to the model. We quote from their paper: *"Accurate semantic segmentation needs high-resolution images, but the released ViT CLIP models are designed for low-resolution images (e.g., $224 \times 224$) and directly apply to high-resolution images giving a poor performance. 
To alleviate the conflicts in input resolutions, we use low resolution images in the CLIP model and high-resolution images in the side adapter network."* **Table 5 of Supp, the mask proposals are optimized with the CNN feature; mask proposals from ViT-based CLIP**: Regarding the evaluation of directly applying CNN-based and ViT-based CLIP as the mask classifier, we use the **ODISE model as mask generator**, which is **not specifically trained for either CNN-based or ViT-based CLIP (but used as a separate module to ensure fairness)**, and thus the mask proposals were not optimized with CNN-based CLIP. Finally, we would like to note that false positive/negative proposals are a common issue in most segmentation models, and thus evaluating the CLIP model with a real mask proposal model (instead of ground-truth masks) is also practically important and reasonable. We will revise the draft to make it clearer. **Results of 1281 resolution are missing in Table 5 of Supp**: We thank the reviewer for the suggestion. We note that neither 1280 nor 1281 is divisible by 14, and thus it is not applicable to use ViT-L/14. We provide more results under resolutions 224 to 1120 below, where 1120 is expected to approximate the result at 1280.

| Backbone | COCO PQ @224 | @448 | @672 | @896 | @1120 | ADE20K PQ @224 | @448 | @672 | @896 | @1120 |
|---|---|---|---|---|---|---|---|---|---|---|
| ViT-L/14 | 19.3 | 22.5 | 20.6 | 18.5 | 14.9 | 11.9 | 13.7 | 12.6 | 11.6 | 9.1 |
| ConvNeXt-L | 17.3 | 23.5 | 27.0 | 28.6 | 29.3 | 9.3 | 12.8 | 14.8 | 16.0 | 15.9 |

> ***W3: Equation typos*** We sincerely thank the reviewer for pointing out the typos. We will fix them in the revision. Additionally, we would like to note that the geometric ensemble is not a specific design proposed by F-VLM. Instead, it is a common trick employed in many prior works such as [SAN, OVSeg, SimSeg, ViLD, ODISE]. 
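For readers unfamiliar with the geometric ensemble mentioned in the W3 response: in its common form (used by F-VLM, ODISE, and others) it takes a per-class weighted geometric mean of the trained in-vocabulary classifier's probabilities and the frozen CLIP probabilities. The sketch below is our own illustration of that common form — the function name, the `alpha`/`beta` values, and the array shapes are assumptions, not the paper's exact Eq. 7.

```python
import numpy as np

def geometric_ensemble(p_in, p_clip, seen, alpha=0.4, beta=0.8):
    """Fuse in-vocabulary and frozen-CLIP class probabilities.

    p_in, p_clip: (num_masks, num_classes) probability arrays.
    seen: (num_classes,) boolean mask, True for training-vocabulary classes.
    alpha/beta: CLIP weight for seen/unseen classes (beta > alpha, since the
    frozen CLIP score is trusted more on classes never seen during training).
    """
    fused = np.where(
        seen[None, :],
        p_in ** (1 - alpha) * p_clip ** alpha,   # seen: lean on the trained head
        p_in ** (1 - beta) * p_clip ** beta,     # unseen: lean on frozen CLIP
    )
    return fused / fused.sum(axis=1, keepdims=True)  # renormalise per mask

p_in = np.array([[0.70, 0.25, 0.05]])    # trained head: confident on a seen class
p_clip = np.array([[0.20, 0.30, 0.50]])  # frozen CLIP: prefers the unseen class
seen = np.array([True, True, False])
print(geometric_ensemble(p_in, p_clip, seen).round(3))
```

With `beta` close to 1, the unseen-class score is dominated by frozen CLIP, which is the behaviour open-vocabulary methods want on novel categories.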
> ***Q1: ConvNeXt-L CLIP backbone comparison*** We thank the reviewer for the question. We use the ConvNeXt-L with "laion2b_s29b_b131k_ft_soup" weight in OpenCLIP. However, we respectfully disagree that ConvNeXt-L enjoys an advantage by pre-training with a higher resolution. In fact, as we work on object-level prediction, the dense feature map is required, instead of a globally pooled vector. Additionally, we note that only the last layer of the CLIP model can be used directly for classification due to the pre-training objective. Consequently, the ConvNeXt-L backbone leads to a ***32x*** downsampled feature map (i.e., $10 \times 10$, if input size is $320 \times 320$), while ViT-L/14 has a much larger feature map (i.e., $16 \times 16$, even if input size is $224 \times 224$), which actually favors ViT-L/14 when using them as dense feature extractors. This explains why ViT-L/14 performs better than CovnNeXt-L when input size is $224 \times 224$ in Table 5 of Supplementary. > ***Limitation: Overlapping vocabularies*** We thank the reviewer for the question. We note that this is a common problem for most existing open-vocabulary segmentation works. It also exists in current benchmarks, e.g., in ADE20K, there are three different but semantically similar classes: *chair*, *armchair*, and *swivel chair*. FC-CLIP simply relies on CLIP to make predictions among the overlapping vocabularies. We agree with the reviewer that how to resolve such problems (e.g., building hierarchical vocabulary space) is an interesting future direction.
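The feature-map sizes quoted in the Q1 response follow directly from the final output stride (32 for ConvNeXt) or patch size (14 for ViT-L/14); a trivial check:

```python
def final_feature_map_side(input_size, stride):
    """Side length of the last feature map given the output stride / patch size."""
    return input_size // stride

# ConvNeXt-L at 320x320 input vs. ViT-L/14 at 224x224 input
print(final_feature_map_side(320, 32))  # 10, i.e. a 10x10 map
print(final_feature_map_side(224, 14))  # 16, i.e. a 16x16 map
```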
Summary: This paper proposes to build a one-stage open-vocabulary detector with a frozen language encoder (CLIP) and achieves good performance. Strengths: 1. Strong performance. Performance on many benchmarks is better than previous models. 2. Simple framework. The one-stage design does look simpler to use. Weaknesses: 1. The novelty is limited. The proposed design does not involve enough novelty. 2. From my point of view, one-stage and two-stage methods do not differ that much. Why is the one-stage method much better than previous methods? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Described above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Described above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback and address the concerns below.

> ***W1: Novelty***

Please refer to common concerns **C2: Novelty/Contribution**

> ***W2: One-stage vs. two-stage methods***

A one-stage framework can re-use a shared feature extractor across different modules, which not only leads to a simpler framework but also achieves a much better accuracy-cost trade-off. Compared to the state-of-the-art two-stage method ODISE, FC-CLIP demonstrates better performance, while using only 15.4% of the total parameters and running 4.4x faster. The proposed one-stage model FC-CLIP significantly simplifies the system design without the need to consider separate modules in the pipeline (e.g., one only needs to consider one backbone instead of multiple backbone combinations).
Summary: The authors propose an approach based on the CLIP model and extend it for zero-shot semantic segmentation. The authors argue that previous works solve the problem with a two-stage approach, which first generates mask predictions using one backbone and then extracts features from another backbone using the CLIP model, and that this is suboptimal and inefficient. They present an approach using a frozen CLIP image encoder as the backbone and generate masks and predictions with trainable pixel and mask decoders. They validate the effectiveness of the proposed method on several commonly used semantic segmentation benchmarks and achieve solid performance. Strengths: 1. Extending the CLIP model to semantic segmentation in an efficient and effective way is a recently popular and active research problem in Computer Vision. The authors present an approach to address this problem. 2. The proposed technique is technically sound. 3. The authors conduct experiments comparing to the latest previous work, ODISE, and show that the performance is comparable while enjoying faster inference speed. Weaknesses: 1. The contribution is incremental. The proposed approach is very similar to OpenSeg [28], with just a different selection of backbone, mask generation, and in-vocabulary classification. The minor difference in the selection of components is not significant for a top-tier conference. Comparing to [28], it is unclear where the improvement comes from. It could be because of the selection of pixel decoder features for in-vocabulary categories, the different selection of mask generation, or the input resolution. There is no ablation to understand this, and none of these changes are significant contributions. 2. The naive single-stage baseline presented in the paper is not reasonable. As the CLIP backbone is fine-tuned without considering the text encoder, it naturally loses the capability of open-vocabulary classification. 
The baseline is not reasonable but created in a way that makes the contributions of the proposed method appear meaningful. 3. The technical details presented in this paper are not self-contained. Details as follows: - The description of the naive single-stage baseline is vague. What is the mask generator used in the baseline? Does the classifier use text embeddings or something else? - The description of the class-agnostic mask generator presented in the paragraph starting at line 192 is vague and lacks details. What is the pixel decoder with axial attention? What are the kMaX mask decoders? What is the k-means cross-attention? How is Hungarian matching used? How is the subset of predicted masks selected during the matching process? As mask generation is a critical component in the proposed method, it needs to be properly described, even if introduced in previous work, to make the paper self-contained and to justify the contribution of the proposed method. - Abuse of notation. In line 136, sum{m_i} <= 1^{HxW} is not properly defined. I am guessing it means every entry in the matrix is smaller than or equal to 1. This is a minor issue. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Please clarify the contributions of the proposed method. 2. Please justify the difference between the proposed approach and the previous work [28] Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitation is discussed in the paper to some extent and I don't have major concerns on the limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and address the concerns below.

>***W1, Q2: Comparison to OpenSeg***

In the submission, we have already carefully discussed the limitations of naive single-stage methods such as OpenSeg, and how FC-CLIP differs from them. We summarize them again below for reference: 1. As discussed in the Supplementary L19 to L23, we note that methods like OpenSeg jointly fine-tune the whole model, which leads to a worse generalization ability to novel concepts. In this paper, we have provided in-depth analysis and experiments to validate the importance of adopting a frozen CNN-based CLIP backbone for better open-vocabulary segmentation performance. 2. We respectfully disagree that *"Comparing to [28], it is unclear where the improvement come from. …. There is no ablation to understand this and neither of these changes are significant contributions."* In fact, in the Supplementary, we have compared against a naive single-stage baseline which can be considered a reproduction of OpenSeg (as their code is not open-sourced) in our framework (Supp. Table 1). We quote the results from the Supplementary here:

| mask generator | in-vocab classifier | out-of-vocab classifier | $PQ$ | $PQ_{seen}$ | $PQ_{unseen}$ |
|:--------------:|:-------------------:|:-----------------------:|:----:|:-----------:|:-------------:|
| trainable | trainable | - | 16.2 | 33.9 | 2.9 |
| trainable | - | frozen | 19.4 | 28.5 | 12.6 |
| trainable | trainable | trainable | 19.6 | 34.9 | 8.2 |
| trainable | trainable | frozen | 23.8 | 36.0 | 14.7 |
| frozen | frozen | frozen | 24.5 | 37.4 | 14.9 |

As shown in the table above, the first row **(trainable, trainable, -)** can be considered a reproduction of OpenSeg in our framework. Both our final model (last row) and our "OpenSeg" baseline (1st row) use the same backbone and segmentation framework, and thus involve no "selections of components".
Besides, we also disagree that *"improvement is from better selection of modules".* When compared to the prior state-of-the-art ODISE, FC-CLIP has a much smaller backbone and simpler design, and actually achieves even better performance when switching to ODISE's segmentation framework Mask2Former.

>***W2: The naive single-stage baseline not reasonable. Fine-tuning text encoder?***

We respectfully disagree with the reviewer. To the best of our knowledge, most open-vocabulary segmentation methods (e.g., SAN, ODISE) do not fine-tune the text encoder. This is natural, as the COCO Panoptic dataset contains only 133 classes, where freezing the text encoder avoids the catastrophic forgetting problem. Our pipeline closely follows them. We do not see that freezing the text branch is improper, considering segmentation datasets only contain limited training vocabularies. To the best of our knowledge, *only* the concurrent work OpenSeeD (ICCV 2023) fine-tunes the text encoder when trained on the COCO and Objects365 datasets. However, they achieve 19.7 PQ on ADE20K, which is similar to the performance of our all-trainable baseline at 19.6 PQ (trained only on COCO). We also note that the single-stage method OpenSeg is not open-sourced. However, we have closely followed it to build our solid single-stage baseline, which is not *"created in a way to make the contributions of the proposed method appear meaningful".*

>***W3.1: Mask generator and classifier for naive single-stage baseline***

As shown in Figure 2 of our paper, we aim to provide a system-wise comparison among different open-vocabulary segmentation methods, involving no specific requirements on the Mask Generator or Classifier. To answer this question, we have built a baseline mimicking OpenSeg (as their code/model is not open-sourced) in Table 1 of the Supplementary, which uses the same mask generator and classifier as our final model (kMaX-DeepLab segmentation framework and CLIP text embeddings).
>***W3.2: Technical details regarding model architecture and training***

We respectfully disagree with the reviewer. The proposed FC-CLIP is a meta architecture, which can be built on top of several segmentation frameworks ***without*** any change (as shown in our response to Q1 of Reviewer N4iz, where we also experiment with Mask2Former). We do not provide any further details of the adopted kMaX-DeepLab segmentation framework, as they are not the main focus of this work (and we did not make any change to them). Providing additional detailed descriptions of them obscures the focus of this work: a general open-vocabulary segmentation pipeline with a ***frozen convolutional CLIP***. We believe that the provided descriptions and references in the draft are already sufficient for the readers to get the context. Additionally, we promise to fully open-source our training/testing code, allowing the community to check every detail. If the reviewer still disagrees with us, we would like to get a second opinion from either the other reviewers or the area chair. If they also think so, we are more than happy to add more details for those segmentation frameworks in the main paper.

>***W3.3: Notation***

The notation appropriately explains the property of panoptic segmentation, where the predicted masks do not overlap with each other. This notation is standard for panoptic segmentation and is aligned with the panoptic segmentation works [40, 72]. That being said, we thank the reviewer for the suggestion, and we will make it clearer.

>***Q1: Contributions of proposed methods.***

Please refer to common concerns **C2: Novelty/Contribution**
Summary: This work proposes a new approach to open-vocabulary panoptic segmentation that unifies the mask generator and CLIP classifier into a single-stage framework. This is achieved by sharing the feature extractor between them, which presents two challenges: disrupting the alignment between image and text features during fine-tuning and the need for higher resolution inputs for dense prediction tasks. The authors address these challenges by adopting the shared Frozen CNN-based CLIP backbone. The resulting FC-CLIP model achieves state-of-the-art performance on several benchmarks while being more efficient and effective than previous methods. Strengths: 1. The paper's approach to unifying the mask generator and CLIP classifier into a single-stage framework is a novel contribution to the field of open-vocabulary panoptic segmentation. 2. The paper is well-written and clearly presents the problem of open-vocabulary panoptic segmentation, the challenges of the current two-stage pipeline, and the proposed FC-CLIP model. 3. The paper is well-organized and clearly presents the problem, methodology, and results. The authors provide a detailed explanation of the FC-CLIP model, making it easy to understand the proposed approach. The paper's figures and tables are well-designed and provide a clear visualization of the results. Weaknesses: 1. Absence of Ablation Study: The paper lacks an ablation study to conduct a thorough analysis of the impact of different components or design choices of the FC-CLIP model on its performance. For instance, what would be the effect on performance of including the in/out-vocabulary classifiers and the combining strategy within the naive single-stage framework? 2. The experimental results are not convincing enough. The FC-CLIP model has a higher number of trainable parameters compared to ODISE. However, it remains uncertain whether the observed performance improvement can be attributed solely to the increased parameter count. 
(2) Table 2 highlights significant improvements achieved by the FC-CLIP model on the Cityscapes dataset. It would be beneficial to provide additional explanations or discussions to elucidate the reasons behind these improvements. (3) While FC-CLIP demonstrates noticeable enhancements on the ADE20K dataset, the improvements on the COCO dataset appear to be comparatively smaller. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The proposed method is well-motivated and novel. However, some key experiments are lacking. For more details, please refer to the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have briefly discussed the limitations and future work of their work. However, there are some limitations that require further discussion. The FC-CLIP model relies on a CLIP model pre-trained on Internet data that may be biased, which calls for future research on calibration to avoid misuse. They could provide a more detailed discussion of these issues. For example, they could discuss the potential biases in the pre-trained CLIP model and how these biases could impact the performance of the FC-CLIP model. They could also discuss the potential negative societal impact of the FC-CLIP model, such as its use in surveillance systems or other applications that could infringe on privacy rights. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback and address the concerns below.

>***W1: Missing ablation study on different components/design choices***

We have **already provided the asked ablation studies** on model designs in Table 1 and Table 4 of the Supplementary, e.g., combining in/out-of-vocabulary classifiers, as well as their effect in the naive single-stage framework. To clarify the confusion, we quote the results from the supplementary here again:

**1. Combining in-/out-of-vocabulary classifiers and hyper-parameters:**

| ensemble (alpha, beta) | arithmetic ensemble | geometric ensemble |
|:----------------------:|:-------------------:|:------------------:|
| (0.0, 0.0) | 17.6 | 17.6 |
| (1.0, 1.0) | 19.4 | 19.4 |
| (0.0, 1.0) | 22.8 | 22.8 |
| (1.0, 0.0) | 15.7 | 15.7 |
| (0.5, 0.5) | 23.2 | 23.4 |
| (0.4, 0.6) | 23.1 | 23.3 |
| (0.4, 0.7) | 23.7 | 24.2 |
| (0.4, 0.8) | 24.0 | 24.5 |
| (0.4, 0.9) | 23.9 | 24.1 |

**2. Effect of each module:**

| mask generator | in-vocab classifier | out-of-vocab classifier | $PQ$ | $PQ_{seen}$ | $PQ_{unseen}$ |
|:--------------:|:-------------------:|:-----------------------:|:----:|:-----------:|:-------------:|
| trainable | trainable | - | 16.2 | 33.9 | 2.9 |
| trainable | - | frozen | 19.4 | 28.5 | 12.6 |
| trainable | trainable | trainable | 19.6 | 34.9 | 8.2 |
| trainable | trainable | frozen | 23.8 | 36.0 | 14.7 |
| frozen | frozen | frozen | 24.5 | 37.4 | 14.9 |

For the naive single-stage models with in-/out-of-vocabulary classifiers, please refer to the third row **(trainable, trainable, trainable)**, which leads to 19.6 PQ, a performance degradation of -4.9 PQ from our final setting in the last row ($PQ_{unseen}$ degrades most, by -6.7 PQ).
The 4th row **(trainable, trainable, frozen)** provides the result of employing another frozen off-the-shelf CLIP as the out-of-vocabulary classifier and leads to a comparable PQ score of 23.8 (-0.7 PQ), but we note that this actually leads to the same model cost as two-stage models, with the need for additional weights and an extra forward pass. We will add additional elaboration to the results to avoid confusion.

>***W2.1: Improvement comes from more trainable parameters?***

We note that ODISE has a great advantage over our method with *1294M* more frozen parameters. We only have *6M* more trainable parameters, which is mainly due to the differences between kMaX-DeepLab and Mask2Former. We think that the 7.5 times more frozen parameters should have more impact on performance compared to the small additional 6M trainable parameters. That being said, to further address the question, we also build FC-CLIP on top of Mask2Former, which not only has fewer (-7M) trainable parameters (note that ODISE has another trainable MLP for the diffusion model's text embeddings), but also has better performance than ODISE, indicating the improvement comes from our simple and effective meta architecture design.

| | frozen params (M) | trainable params (M) | ADE20K | Mapillary Vistas | Cityscapes |
|---------------------|:-----------------:|:--------------------:|:---------------------:|:----------------:|:----------:|
| ODISE | 1494 | 28 | 22.6 / 23.4 (caption) | 14.2 | 23.9 |
| FC-CLIP-kMaX | 200 | 34 | 24.5 | 17.0 | 43.0 |
| FC-CLIP-Mask2Former | 200 | 21 | 26.8 | 18.2 | 44.0 |

>***W2.2: Improvement on Cityscapes***

FC-CLIP shows a larger improvement over ODISE on street-view datasets such as Cityscapes and Mapillary Vistas. We think that this is because ODISE relies on latent diffusion models for feature extraction, which perform a VQ tokenization on the input image. The VQ tokenizer was trained for image generation purposes and thus may not handle complex street views very well.
On the contrary, FC-CLIP has a simpler and more effective design, which generalizes well to such datasets.

>***W2.3: Improvement on COCO***

In Table 1 of the main paper, the COCO dataset is used as the only training dataset for both ODISE and FC-CLIP. Therefore, its results only indicate the closed-vocabulary performance and are thus closely related to the model's capacity to fit the training dataset instead of its ability to generalize to novel datasets in the open-vocabulary scenario. In this case, ODISE has a much larger model size and is thus expected to fit COCO better than FC-CLIP (as shown in the paper L281 to L284, we also observe that ODISE can provide better mask proposals, thanks to its much larger model size). However, FC-CLIP generalizes much better to other datasets in a zero-shot manner.

>***CLIP biases and limitations***

We thank the reviewer for bringing up the bias and limitation in the pre-trained CLIP model, which may also impact FC-CLIP. We will add the related limitation discussion in a revision as suggested.
Rebuttal 1: Rebuttal: We appreciate all reviewers for their valuable suggestions, and we address the common concerns as follows. For the remaining concerns, please see the individual post for each reviewer.

>***C1: From reviewers N4iz W1, VW2t W1, Relationship to F-VLM***

We thank the reviewers for the suggestion. We note that F-VLM is a pioneering work that builds an open-vocabulary detection framework on top of a frozen CLIP backbone. However, FC-CLIP differs from it with a totally different observation and motivation, as detailed below. Our work was initially motivated by the state-of-the-art open-vocabulary segmentation model ODISE, which found that the CLIP backbone extracts noisier features than diffusion models (see Figure B.1 in the ODISE paper), leading to inferior segmentation results (which justifies their adoption of diffusion models). Their observation motivated us to look deeply into the problem. Interestingly, our discoveries show that both ViT-based (used by ODISE) and CNN-based CLIP can produce semantically meaningful features. However, when scaling up the input resolution, we discover that ViT-based CLIP features become noisier, while CNN-based ones are smoother and generalize better across input sizes. Concurrently, F-VLM also empirically found that a frozen CLIP can provide meaningful features for object detection; however, they did not choose CNN-based CLIP on purpose and thus did not compare carefully between ViT-based and CNN-based CLIP backbones. On the other hand, in our paper, we have provided careful ablation studies on ViT-based and CNN-based CLIP in Table 5 of the Supplementary, where we observe that even though both ViT-based and CNN-based CLIP initially have comparable performance at resolution 224, CNN-based CLIP shows better and more robust performance when the input resolution scales up. These important studies are missing in F-VLM.
Finally, even though we would like to provide more technical implementation differences from F-VLM, we note that in their GitHub repository, they only released a demo and testing code (we note that their inference code was just open-sourced on **8/7/2023**, and their best backbone R50x64 is not available), which makes it hard to provide an in-depth comparison and also prevents the community from reproducing their results (please see F-VLM's OpenReview public comment, where people reported failing to reproduce the results). On the contrary, we promise to fully release all the training/testing code of FC-CLIP (and the best backbone) to facilitate the research in this area.

>***C2: From reviewers YCke Q1, VW2t W1, 5KPy W1, Novelty/Contribution***

To be concrete, we summarize our contributions/novelties as follows: 1. To the best of our knowledge, FC-CLIP is the first work that provides an in-depth analysis on adapting different types of CLIP models for downstream open-vocabulary segmentation tasks that require higher resolution inputs, while prior works (e.g., MaskCLIP, ODISE) simply favor a frozen CLIP model or Diffusion Model without looking into the difference between CNN-based and ViT-based CLIP models. 2. We look into the problem of extending the CLIP model for open-vocabulary segmentation, identify the important problem of resolution discrepancy between the pre-training stage (image-text contrastive learning) and the fine-tuning stage (open-vocabulary segmentation), and propose a simple and effective solution by adopting a frozen CNN-based CLIP model. 3. FC-CLIP is a simple and effective meta architecture that can be easily adopted on top of different segmentation frameworks. It not only achieves significantly better performance compared to prior state-of-the-art methods but also enjoys a much lower training/testing cost.
NeurIPS_2023_submissions_huggingface
2023
Summary: In this submission, the authors propose a new method for open-vocabulary panoptic segmentation. In the open-vocabulary segmentation setting, the model is trained on seen category annotations and tested on unseen categories. The frozen features of CLIP/ALIGN have been demonstrated to be effective in new category generalization. To leverage the representation of CLIP, prior works typically use a two-stage framework: one stage forwards high-resolution images for mask generation with vanilla networks, and the other stage inputs low-resolution images for mask classification. The authors propose a shared Frozen Convolutional CLIP backbone (FC-CLIP) which unifies the pipelines into a single stage. It exploits a frozen CLIP with a ConvNeXt image encoder as the backbone network. FC-CLIP is simple yet effective, surpassing the prior state-of-the-art on many open-vocabulary segmentation tasks. Strengths: 1. This submission explains the motivation and method very well. I like Figure 2 very much, which clearly shows the differences between FC-CLIP and prior works. 2. The proposed FC-CLIP simplifies the multiple forward passes of the image encoder into just one forward pass, which saves computation cost. 3. The proposed model outperforms many prior works, and more quantitative results are also provided in the supplementary material. Weaknesses: 1. In the related work, I would suggest the authors discuss more about the relationship with F-VLM, which also uses a frozen Convolutional CLIP as the shared image encoder. What are the major differences between the designs of F-VLM and FC-CLIP? 2. The ablation study on CLIP model type is missing. In Figure 1, the authors demonstrate that CNN-based CLIP should get better features than ViT-based CLIP. I truly appreciate that the authors did compare ViT-L/14 with ConvNeXt-L in Table 5 of the supplementary material. I would suggest the authors also compare other CNN-based CLIP models like R50x4, R50x16, R50x64 used in F-VLM. 3. 
Table 5 in the supplementary material shows a very interesting experiment: when increasing resolution from 224 to 448, ViT-L still outperforms ConvNeXt-L, but when increasing resolution to 672, ViT-L/14 accuracy drops significantly, which is slightly counterintuitive. Is this PQ evaluated with the geometric mean or not? And I would also recommend evaluating zero-shot ImageNet classification accuracy at different input resolutions, which could better justify whether CNN-based CLIP outperforms ViT-based CLIP when changing resolutions. 4. The authors state "training model solely with ViT-based CLIP is infeasible", probably due to the GPU memory constraint. To compare more fairly with CNN-based CLIP, I would suggest the authors use a sliding window to extract features for ViT-based CLIP, e.g., sliding a 224x224 or 336x336 window over a 1024x1024 input image to extract the ViT-based CLIP features, which should still have the capability of zero-shot classification. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. I noted the authors used a different resolution setting, long side 1281. Is there any ablation on this design compared to the Mask2Former 1024 short-side resize used in other works? Besides, regarding the inference time comparison, I would suggest the authors use the same input size for all the models. 2. In Table 1 of the supplementary material, I think the last row should be swapped with the second last row. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback and address the concerns below.

>***W1: Relationship with F-VLM***

Please refer to **C1: Relationship to F-VLM**

>***W2: Other CNN-based CLIP***

We thank the reviewer for the valuable suggestion. We provide results with different CNN backbones below:

| CLIP | R50 (38.3M) | R101 (56.3M) | R50x4 (87.1M) | R50x16 (167.3M) | ConvNeXt-L (199.8M) |
|-----------|:-----------:|:------------:|:-------------:|:---------------:|:-------------------:|
| ADE20K PQ | 15.5 | 17.1 | 18.5 | 20.4 | 24.5 |

As shown in the table above, the performance increases as the model size increases. Using the ConvNeXt-L backbone in our setting achieves the best performance, echoing that ConvNeXt is a more modern CNN design. We note that prior works use a very strong backbone and CLIP model (e.g., StableDiffusion UNet + ViT-L/14 as in ODISE), which makes it hard to compare against them with much smaller models. Therefore, we adopt ConvNeXt-L in the end (note that ConvNeXt-L has 199.8M parameters, while ViT-L has 304.3M).

>***W3: Table 5 in Supplementary***

We thank the reviewer for the question and suggestion. We address each concern below. **Evaluation setting**: We herein provide more details regarding the mask classification evaluation. The experiments are our early exploration to verify the effect of using CNN-based and ViT-based CLIP as mask classifiers at different input resolutions, with ODISE's codebase. Specifically, we adopt ODISE's mask proposals (thus the mask proposals are kept the same across settings, and ***not trained*** for any specific CLIP model), but replace the mask classification results using either a LAION-2B pretrained ViT-L/14 or ConvNeXt-L CLIP backbone. For ViT-L/14, the classification pipeline follows MaskCLIP [24] and ODISE with attention masking. For ConvNeXt-L, we employ simple mask pooling to obtain classification logits for each mask.
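For readers unfamiliar with the term, the mask pooling mentioned above can be sketched as follows. This is a hypothetical NumPy illustration (the shapes and the `mask_pooling` helper are assumptions for exposition, not FC-CLIP's actual code): the dense feature map from the frozen CLIP backbone is average-pooled inside each predicted mask, giving one embedding per mask that can then be compared against CLIP text embeddings to obtain classification logits.

```python
import numpy as np

def mask_pooling(features, masks, eps=1e-6):
    """Average-pool dense CLIP features inside each predicted mask.

    features: (H, W, C) dense feature map from the frozen CLIP backbone.
    masks:    (N, H, W) predicted masks with values in [0, 1].
    Returns an (N, C) array with one pooled embedding per mask.
    """
    feats = features.reshape(-1, features.shape[-1])   # (H*W, C)
    m = masks.reshape(masks.shape[0], -1)              # (N, H*W)
    pooled = m @ feats                                 # (N, C) weighted sums
    return pooled / (m.sum(axis=1, keepdims=True) + eps)
```

Each pooled embedding would then be dot-producted with the normalized text embeddings of the class names to produce per-mask logits.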
For simplicity and a fair comparison, there is ***no*** geometric mean used in this ablation study. **Zero-shot ImageNet accuracy**: We appreciate the suggestion and perform a similar experiment on ImageNet, shown in the following table.

| IN1k Acc | 224 | 336 | 448 | 560 | 672 | 784 | 896 |
|------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| ViT-L/14 | 75.3 | 74.3 | 71.3 | 67.5 | 63.1 | 58.5 | 53.9 |
| ConvNeXt-L | 75.1 | 77.1 | 76.8 | 74.2 | 69.8 | 65.6 | 58.4 |

We observe the same trend (as in segmentation) that while both ViT-based and CNN-based CLIP can yield strong zero-shot performance at smaller resolutions, CNN-based CLIP generalizes better when scaling up input resolutions. We would like to emphasize that the scenario of applying the CLIP model for image classification on ImageNet and mask classification on COCO is slightly different. ImageNet contains object-centric images, while COCO has objects with diverse scales. As a result, scaling up the input resolution from 224 to 448 improves the recognition accuracy (as small objects are more visible) for **both** ViT-based and CNN-based CLIP models on COCO (see Table 5 in the Supplementary), but not on ImageNet.

>***W4: ViT-based CLIP backbone in a sliding window manner***

We thank the reviewer for the question and suggestion. We note that using a ViT-based CLIP backbone presents several technical challenges, including, but not limited to: GPU memory, absence of multi-scale features, resolution discrepancy between upstream pretraining and downstream segmentation, etc. Even with the suggested sliding window method, a careful re-design is still needed to handle problems such as overlapping pixels, ensembling class tokens at each window, a side-adapter to generate multi-scale features, etc. As noted in the paper, we think a CNN-based CLIP is a straightforward and effective solution to these problems, allowing us to build a simple, strong, and effective single-stage open-vocabulary segmentation framework.
That being said, we agree with the reviewer that adapting ViT models is an interesting and promising research problem, especially considering that ViT usually demonstrates better model scaling properties.

>***Q1: Resolution difference***

We thank the reviewer for the question. We note that FC-CLIP is a meta architecture that can be built upon several different segmentation frameworks. In the submission, we follow kMaX-DeepLab to resize the longer edge to 1281 and pad the shorter edge to 1281, which gives a similar effective size (if only considering non-padded pixels) as Mask2Former, which resizes the shorter edge to 800 and the longer side to 1333. To further clarify the resolution issue, we also provide results of building FC-CLIP upon Mask2Former in the following table. Note that ODISE is also built upon Mask2Former, and thus FC-CLIP-Mask2Former further provides a comprehensive comparison with ODISE.

| | frozen params (M) | trainable params (M) | ADE20K | Mapillary Vistas | Cityscapes |
|---------------------|:-----------------:|:--------------------:|:---------------------:|:----------------:|:----------:|
| ODISE | 1494 | 28 | 22.6 / 23.4 (caption) | 14.2 | 23.9 |
| FC-CLIP-kMaX | 200 | 34 | 24.5 | 17.0 | 43.0 |
| FC-CLIP-Mask2Former | 200 | 21 | 26.8 | 18.2 | 44.0 |

As shown in the table above, we observe that FC-CLIP achieves even better performance when we switch to the same segmentation framework as ODISE (i.e., Mask2Former).

>***Q2: Swap rows in Table 1 of sup***

Thanks for the suggestion. We will look into it in the revision.

---

Rebuttal Comment 1.1: Title: Response to author Comment: Thank the authors for the detailed rebuttal. I like it a lot. I am glad to see Mask2Former further improve the performance of FC-CLIP with fewer trainable parameters. They are both great frameworks but with some different implementation details. I also truly appreciate the authors' effort in open-sourcing and reproducibility. I would like to raise my rating from Weak Accept to Accept. 
And please do include more discussion of F-VLM for the general audience.

---

Reply to Comment 1.1.1:
Comment: Thanks a lot for reading our response and providing the valuable feedback! We will incorporate your valuable suggestions into our next revision.
Expanding Small-Scale Datasets with Guided Imagination
Accept (poster)
Summary: The paper proposes an image generation framework for expanding small-scale datasets. The proposed Guided Imagination Framework (GIF) leverages large language-vision models (i.e., CLIP) and generative models (i.e., DALL-E2, Stable Diffusion, and MAE) based on two criteria that help to generate informative new images with (i) class-consistent semantics but (ii) higher content diversity. Experimental analyses and ablation studies on several natural and medical image datasets show that GIF is effective and efficient in boosting the accuracy and generalization of the models trained on artificially expanded datasets.

Strengths:

### Significance

Combining large language-and-vision models with generative models such as Stable Diffusion to generate synthetic images has become a popular topic that can support various downstream tasks. To this end, the paper has the potential to attract wide attention.

### Originality

The proposed guided imagination framework for dataset expansion combines CLIP with several SOTA generative models in a unique way to optimize the variation over the latent space so that they can generate images with more diverse content but still within the same semantic concept/category.

### Clarity

The two core criteria of class-maintained informativeness boosting and sample diversity promotion are defined and illustrated with clear examples. The paper is written well in general, although there is just too much material that could not fit into the main paper and was hence moved to the supplementary material (but this leaves holes in the paper).

### Quality

The experimental evaluations and ablation studies are quite extensive and informative. Main findings are summarized adequately in the main text and detailed discussion is presented in the supplementary material due to page constraints. The contribution of the proposed framework is supported with quantitative evidence from various perspectives.
Weaknesses:

### Experiments

Some of the important technical and algorithmic details are not included in the paper, which leaves the reader a bit hanging in the air and pushes them to read the supplementary material (which becomes rather like "mandatory material"). Not sure if there is an easy way to fix this, but it would be good to re-organize the material to remedy this concern. How are the baseline DALL-E2, SD, and MAE implemented? Are they configured the same way as the proposed methods but without the actual GIF components (i.e., ablated)? While fine-tuning CLIP, how much effort was put into its proper optimization; for example, was something similar to the robust fine-tuning of CLIP as presented in [A] followed? Also, it would be interesting to see how much more there is to be gained if GIF is applied to large-scale datasets.

### Literature Review & Citations

Literature review is too brief. It would be good to squeeze in a bit more of the other relevant text-driven dataset generation papers (e.g., [33, 44, 48, 69, B-D]). At the end of the day, GIF-SD is also driven by text prompts. To this end, some of the redundancy across sections (e.g., Sections 1, 3, and 4) with repeated statements can be eliminated. The references contain papers that are not cited in the main paper. Actually, only 43 (see the list below) of the 98 papers are referenced in the main paper and the others are probably mentioned (didn't fully check) in the supplementary material. The bibliography must include only the references that are cited in the main paper. Supplementary material can have its own references. Citations that appear in the main paper: 2, 6, 8, 9, 10, 11, 12, 14, 15, 20, 22, 23, 24, 29, 30, 35, 36, 37, 45, 46, 47, 49, 50, 52, 55, 59, 60, 61, 62, 63, 66, 68, 72, 73, 74, 76, 77, 81, 82, 86, 89, 93, 97. There is some additional recent work that might be good to include in the paper or in the supplementary [B-E].
### Minor Concerns

- 76: Better to include references to Cutout, GridMask, RandAugment, Cars, and DTD here because they are mentioned for the first time in the paper.
- 208: s' should be f'
- 328: conducts <== conduct
- 329: There is no Figure 10(e) in the main paper.

### Suggested References

[A] Wortsman, Mitchell, et al. "Robust fine-tuning of zero-shot models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[B] Sarıyıldız, Mert Bülent, et al. "Fake it till you make it: Learning transferable representations from synthetic ImageNet clones." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[C] Zhou, Yongchao, Hshmat Sahak, and Jimmy Ba. "Training on Thin Air: Improve Image Classification with Generated Data." arXiv preprint arXiv:2305.15316 (2023).
[D] Tian, Yonglong, et al. "StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners." arXiv preprint arXiv:2306.00984 (2023).
[E] Azizi, Shekoofeh, et al. "Synthetic data from diffusion models improves imagenet classification." arXiv preprint arXiv:2304.08466 (2023).

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
- How are the prior generative models (DALL-E2, SD, and MAE) trained as baselines? Are they configured exactly the same way as the proposed methods but simply without the GIF components?
- While fine-tuning CLIP, how much effort was put into its proper optimization; for example, was something similar to the robust fine-tuning of CLIP as presented in [A] followed?
- How much more is there to be gained if GIF is applied to large-scale datasets? Not asking for more experiments here but looking for some intuition/speculation/expert opinion on this.
- Would it be possible to expand the literature review with a few more relevant papers on text-driven dataset generation?
- Would it be possible to reorganize the paper to include some of the technical details about the method/algorithm in the main paper? - References must be updated to include only those cited in the main paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no discussion on limitations and broader impact of the work. Error analysis is a bit superficial. A more detailed error analysis would be helpful. Is there a risk of mode collapse? Is there a risk of exacerbating biases while generating images? Cutting down the required human effort and cost of data collection can be highlighted as broader impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thanks a lot for the highly constructive comments. We are glad to see that the task significance and the method originality are appreciated. We answer all questions point by point as follows.

---

>**Q1. Would it be possible to put more technical details in the main paper?**

We highly appreciate the constructive suggestion on the organization of technical details. Due to page constraints, some method details were placed in the supplementary material (cf. Appendix D). However, per the NeurIPS guidelines (https://nips.cc/Conferences/2023/CallForPapers), we are allowed an additional page in the camera-ready version if accepted. Given this opportunity, we plan to incorporate more essential method details into the main text to enhance its clarity and accessibility. We hope this adjustment can address the concern.

---

>**Q2. Are the baselines (DALL-E2, SD, and MAE) configured the same way as the proposed methods but without the actual GIF components?**

Yes, these baselines were configured identically to their GIF counterparts for ablated evaluation, with the only exception being the absence of the guided imagination optimization (cf. Section 4). Without the crucial GIF components, these baselines cannot ensure that the synthesized data bring sufficient new information, content diversity, and accurate labels for small-scale dataset expansion. This is evident in Table 1 (cf. Section 5.1), where their performance is markedly inferior to our GIF methods.

---

>**Q3. The implementation of CLIP fine-tuning**

In this work, we did not use any sophisticated fine-tuning strategies like the robust fine-tuning of CLIP presented in [A]. Instead, we kept it straightforward and used cross-entropy for standard fine-tuning. It is worth noting that this cross-entropy training is consistent with the other training-from-scratch baselines of dataset expansion methods. By doing so, we ensure a fair comparison across different methods and baselines.

---

>**Q4.
Can GIF be applied to large-scale datasets?**

Thanks for raising this question. In fact, we have discussed this in Appendix F.8, where our GIF method is adaptable to expanding larger-scale datasets, e.g., the original CIFAR dataset. The results in Table 23 (cf. Appendix F.8) further demonstrate the effectiveness of our approach.

---

>**Q5. Literature review is too brief. Please review more text-driven dataset generation methods and the studies [B-E]**

Thank you for the suggestions. Due to page limitations, we put a detailed review of related work in Appendix A. Should our work be accepted, the extra page granted for the camera-ready will allow us to expand the literature review in the main text. We will then incorporate a more comprehensive discussion of text-driven dataset generation papers and include the related studies [B-E] mentioned by the reviewer.

---

>**Q6. References must be updated to include only those cited in the main paper**

Thanks for pointing this out. We will revise the references in the main paper to include only the ones cited, and ensure the supplementary has its own reference list.

---

>**Q7. Discussion on limitations and broader impact**

In light of the suggestion, we will enrich our supplementary with the following discussions:

**Limitations**:
- **(1) Performance of generated samples**: The expanded samples are still less informative than real samples. For example, a ResNet-50 trained from scratch on our 5x-expanded CIFAR100-Subset achieves an accuracy of 61.1%, which lags behind the 71.0% accuracy on the original CIFAR100. This gap signals the potential for advancing algorithmic dataset expansion. We expect that this pioneering work can inspire more studies to explore dataset expansion so that it can even outperform a human-collected dataset of the same size.
- **(2) Quality of generated samples**: Some samples might have noise, as exemplified in Figure 5b.
Despite seeming less realistic, those samples are created following our guidance (e.g., class-maintained informativeness boosting). This ensures the class consistency of these samples, mitigating potential negative effects on model training. Nonetheless, refining the expansion method to address these noisy cases can further enhance the effectiveness of dataset expansion.
- **(3) Scope of work**: Our current focus is predominantly on image classification. Exploring the adaptability of our method to other tasks, such as object detection, is an intriguing next step.

**Broader impact**: Our method can offer a notable reduction in the time and cost associated with manual data collection and annotation for dataset expansion, as discussed in Section 5.2 (cf. Lines 343-350). This can revolutionize how small datasets are expanded, making deep learning more accessible in scenarios with limited data availability (cf. Table 1).

---

>**Q8. Other minor issues**

Thanks for noting these details. We appreciate the meticulous review. We will make the necessary corrections following the suggestions.

---

We deeply appreciate the insightful feedback from the reviewer. Thanks to these constructive suggestions, our paper's quality has been greatly elevated to meet the conference's high standards. We kindly ask the reviewer to take into account the refinements and enhancements we have incorporated. The reviewer's endorsement is influential and could potentially help other reviewers recognize our paper's merits.

---

Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thanks for the detailed rebuttal! I have noted your responses to my concerns and do not have any further queries.

---

Reply to Comment 1.1.1:
Comment: Thanks again for your insightful and meticulous review. We genuinely appreciate your continued engagement with our work. With the invaluable feedback, our manuscript has seen significant improvement.
We hope that your understanding of our method's value might convey the significance of our contributions more clearly to the entire review panel. Once again, we thank the reviewer for their dedication to enhancing the quality of our paper.
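(For readers curious about the "standard cross-entropy fine-tuning" mentioned in Q3: in its most stripped-down form, it is just gradient descent on softmax cross-entropy over a classification head. A toy sketch, shown here with only a linear head over precomputed features for brevity; all names are hypothetical and this is not the authors' actual code:)

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_finetune_step(W, feats, labels, lr=0.1):
    """One cross-entropy gradient step on a linear classification head
    over (here: precomputed) image features -- standard fine-tuning,
    with no robust-fine-tuning tricks such as those in [A]."""
    probs = softmax(feats @ W)            # (N, C) class probabilities
    onehot = np.eye(W.shape[1])[labels]   # (N, C) target distribution
    grad = feats.T @ (probs - onehot) / len(labels)
    return W - lr * grad
```

In the paper's actual setting the full CLIP image encoder would be updated rather than just a linear head, but the loss is the same.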
Summary: This paper introduces a novel task that addresses the expansion of a limited dataset into a larger one by utilizing a Guided Imagination Framework. The proposed approach harnesses latent features to generate new data instances while placing emphasis on preserving class-specific information and enhancing sample diversity. By employing these criteria, the generated data effectively facilitates the improvement of the model's learning process. The efficacy of the proposed method is demonstrated through rigorous validation on multiple datasets, showcasing its superior performance. Strengths: - This paper presents a promising solution to the challenge of generating reliable supplementary training data, distinguishing itself from conventional data augmentation methods by generating unique content distinct from the existing dataset. - The paper introduces Guided Imagination Framework that leverages prior knowledge of latent features to generate additional data while maintaining alignment with class labels, ensuring consistency throughout the generated samples. - Experimental results show the remarkable effectiveness of this proposed approach in significantly enhancing the accuracy of image classification tasks, substantiating its superiority over alternative methods. Weaknesses: - A major concern arises from the results presented in Table 1, particularly regarding the Stanford Cars dataset (https://ai.stanford.edu/~jkrause/papers/fgvc13.pdf). The comparison reveals that certain alternative methods achieve superior results (>90) without the need for additional data, as demonstrated in (https://arxiv.org/pdf/2102.05918.pdf). This raises skepticism regarding the efficacy of the augmented data in genuinely improving the model's performance, as the reported results fall below the mentioned benchmark (<80). - Another point of consideration is the authors' exclusive focus on employing ResNet-50 as the sole backbone architecture for their experiments. 
This singular choice could introduce bias, as it remains plausible that only ResNet performs well within the proposed framework. To establish the robustness and generalizability of the proposed method, it would be valuable for the authors to explore and validate its performance across multiple alternative backbone architectures, thereby mitigating potential biases. - It is important to acknowledge that the proposed method significantly increases the training time due to the inclusion of a generative model and the utilization of substantially larger amounts of data (at least 5 times more) for training the classification model. This extensive time requirement may pose considerable challenges, particularly in classification tasks where efficiency is a crucial aspect to consider and optimize. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - It would be valuable to receive further insights into the time required to complete the generating-training procedure. Understanding the time constraints associated with the proposed method is essential for assessing its practicality and scalability. Insights into the time requirements would enable a more comprehensive evaluation of the proposed approach in terms of efficiency and resource allocation. - A comprehensive discussion on the training process and underlying structures of the listed methods, including GIF-MAE and GIF-SD, would greatly enhance the readers' understanding of these approaches. Detailed explanations of the methodologies employed, such as the specific techniques utilized for data generation and the architectures of the models, would provide valuable insights into the novelty and effectiveness of these methods. Expanding on these aspects would facilitate a more thorough comparative analysis of the proposed approach against other techniques. 
- Gaining clarity on whether pretrained weights were utilized in all of the proposed methods would be beneficial for assessing the fairness of the comparison. Understanding the extent to which prior knowledge is leveraged across different approaches is crucial for interpreting and contextualizing the results. Specifically, differentiating between methods that start from scratch without any pretrained weights and those that employ pretrained models, such as CLIP, would provide a more accurate understanding of the experimental setup and potential biases. - To further validate the proposed approach, it would be beneficial to compare it with other methods that also generate synthesized data, as referenced in the provided papers (https://arxiv.org/pdf/2301.06043.pdf, https://ieeexplore.ieee.org/document/8629301, https://proceedings.mlr.press/v156/bao21a/bao21a.pdf). Conducting such comparisons would offer valuable insights into the relative strengths and weaknesses of the proposed method and provide a broader perspective on its performance compared to existing state-of-the-art techniques. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - The generation method employed to expand dataset volumes is a time-consuming process that necessitates various preprocessing steps. Furthermore, training with a substantially larger amount of data (at least 5 times more) adds to the complexity. This undertaking should be approached with careful consideration, acknowledging the resource and time requirements involved. - In order to provide a comprehensive evaluation, it would be advantageous for the authors to conduct fair comparisons with existing models and methods, taking into account factors such as memory footprint. 
Considering these aspects would enable a more thorough assessment of the proposed approach in relation to its counterparts, providing a clearer understanding of its advantages and limitations. - Given that the proposed method involves the generation of new data, it is highly recommended that an ethical review be conducted to ensure compliance with ethical guidelines and considerations. The potential impact and implications of generating synthetic data should be carefully evaluated to ensure that it aligns with ethical principles, privacy regulations, and legal requirements. Such a review would demonstrate a responsible approach towards the development and application of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the comprehensive and constructive comments, particularly for recognizing that our solution is promising and remarkably effective. We address all the concerns as follows. --- >**Q1. Concern on the results in Table 1, particularly regarding Cars** The experimental results are indeed reasonable based on the following reasons: - **Different training methodology**: Our focus is to expand small-scale datasets, and as such, **all the models are trained from scratch for evaluation**. This allows us to fairly compare the effectiveness of various expansion methods, without biases from pre-trained models. Please note that our task is orthogonal and complementary to the model transfer task in Align (https://arxiv.org/pdf/2102.05918.pdf), where the transfer performance inevitably surpasses training from scratch due to its extensive pre-training on large datasets. - **Additional fine-tuning results**: To address this concern, we further fine-tune an ImageNet pre-trained ResNet50 on Cars. Using the original dataset, the fine-tuned ResNet-50 achieves a performance of 87.6. Remarkably, with the expanded dataset by GIF-SD, the performance further increases to 88.9, highlighting the effectiveness of our method in enhancing model fine-tuning. The additional results further substantiate the reasonableness of this paper. --- >**Q2. Concern on the exclusive focus on using ResNet-50** Thanks for the comment, but it seems that there might have been a misunderstanding. Our study did indeed consider the robustness and generalizability of our method across various model architectures, not solely focusing on ResNet-50. Specifically, Table 3 (cf. Section 5.1) and Table 13 (cf. Appendix F.3) demonstrate our method's performance on various architectures, such as ResNext, WideResNet, and MobileNet. 
These results highlight an important advantage of our method: the expanded dataset can be used with various network structures without needing to be regenerated, thereby affirming its broad applicability.

---

>**Q3. Question about the time and costs**

Thanks for the comment. Please refer to [General Response G1](https://openreview.net/forum?id=82HeVCqsfh&noteId=e7B6qvrwYr) for the detailed response.

---

>**Q4. Details on GIF-MAE and GIF-SD**

Thanks for the feedback. Due to the page limitations, we had to put the method details, implementation details, and pseudo-code of all proposed methods in Appendix D. We understand that the supplementary is not mandatory reading, but it is intended to provide exactly the comprehensive details the reviewer asks for, so this concern might be addressed by delving into that appendix. If the reviewer has any further questions after referring to these details, we would be more than happy to provide additional clarification. Furthermore, according to the NeurIPS guidelines, we are permitted to add one more page to the camera-ready. In such a case, we will move more crucial method details to the main text to enhance its clarity and comprehension. We hope these measures can sufficiently address the concern.

---

>**Q5. Clarification on whether pre-trained weights were utilized during model training**

During the model training phase, the models are trained from scratch for all dataset expansion methods (**including ours**) in all tables (including Table 4), ensuring no pre-trained weights are used, for a fair comparison. The only exceptions are the **CLIP-related baselines** (in Tables 1, 4, and 15), where the pre-trained checkpoint of CLIP was used. We believe this clarification should offer a more accurate understanding of our experimental setup.

---

>**Q6. Further comparisons to advanced generative methods**

Thanks for suggesting a comparison with the mentioned generative methods.
We recognize its importance, but there are two primary challenges in executing this comparison: (1) The mentioned GAN-based generative methods have not made their model checkpoints publicly available. (2) Training GANs from scratch, particularly when data is limited, frequently results in non-convergence or yields meaningless outcomes, making it unsuitable for small dataset expansion. Meanwhile, recent research [A] has indicated that diffusion models, thanks to iterative denoising, outperform GANs in image generation. As such, we opted for diffusion models in our work, as their checkpoints are readily available.

Even so, we agree on the value of comparisons with more advanced generative methods. Therefore, we further compare our method with a recent ICLR work [B], as recommended by Reviewer 1CUd. Specifically, the method [B] proposes strategies like language enhancement (LE) and CLIP filters (CF) to enhance generative models for generating training data. For a fair comparison, we use this method with SD and our GIF-SD to expand the CIFAR100-S dataset by 5x. The results, as shown in the following table, further affirm the superiority of our method.

| CIFAR100-S | Accuracy |
| ---------- |:---------:|
| Original | 35.0 |
| 5x-expanded by SD | 52.9 (+17.9)|
| 5x-expanded by SD + method [B] | 56.0 (+21.0)|
| 5x-expanded by GIF-SD (ours) | 61.1 (+26.1)|

[A] Diffusion models beat gans on image synthesis. In NeurIPS, 2021
[B] Is synthetic data from generative models ready for image recognition? In ICLR, 2023

---

>**Q7. Question on potential influences of synthetic data**

We appreciate the question on the ethical implications. Please refer to [General Response G2](https://openreview.net/forum?id=82HeVCqsfh&noteId=e7B6qvrwYr) for the detailed response.

---

Thanks for the valuable feedback. Through our rebuttal, we have clarified ambiguities, provided additional results, and deepened our discussions to address all concerns.
We believe our revised work aligns more closely with the conference's standards. A positive reconsideration of the initial assessment would be greatly beneficial to our efforts. Should there be any further questions, we are glad to answer them.

---

Rebuttal Comment 1.1:
Title: Response to Authors' rebuttal
Comment: Dear Authors: Thank you for your responses to the questions I raised. Some of the responses addressed my concerns. Below I'd like to summarize my thoughts:
1. The novelty of the work is still limited, even though the authors have made clear explanations. Reviewer 7xaH has raised a similar point by citing numerous papers dealing with the same dataset expansion.
2. Regarding the cost of the proposed method: it is shown by the authors that the proposed method is more effective than manual annotation for sure. However, the authors have not included the time cost of other models for comparison. Reviewer 7xaH has also mentioned that a sufficient comparison should be made with related works.
3. The authors mentioned that they trained all the models from scratch for fair comparison, without biases from pre-trained models. However, they also mentioned that CLIP-related baselines use pretrained weights. There is a conflict between these two claims, which would lead to a biased comparison. Reviewer 7xaH also mentioned that additional experiments with pre-trained models are necessary.
4. The authors mentioned in the general response that the controlled mechanism minimizes the risks of creating unrelated or potentially harmful images. However, this is questionable since no evidence nor experimental assessment was provided in this respect. Reviewer UbXT has raised a similar concern that the quality and accuracy of the generated samples with perturbed features were not evaluated.
In summation, while the authors have addressed some concerns raised by reviewers, I will not have access to the revised document prior to the final decision.
Based on my assessment, I believe the manuscript requires substantial modifications before it is suitable for publication. Given these considerations, I maintain my initial score. --- Reply to Comment 1.1.1: Title: Follow-up Response (1/3) Comment: We greatly appreciate the time the reviewer has dedicated to reviewing our response and providing further comments. Below, we address the outstanding concerns. --- >**Q1. Concern on the novelty of the work** Thanks for the feedback. We totally understand the reviewer's concern, if we merely focus on the idea of using generative models for training data synthesis. However, the novelty of this work does not rest on the freshness of this idea, but is deeply rooted in the task importance and the method novelty. - **Task significance**: Automatic dataset expansion, especially in small-data scenarios, holds immense importance. While the concept might not be brand-new, our work provides a unique perspective on formally defining and tackling this task. The proposed dataset expansion significantly reduces human efforts and expenses associated with manual data collection, and markedly improves the model performance in small-data scenarios (cf. Table 1 in Section 5.1). These benefits are highly important for real-world applications since manual data collection is highly expensive in small-data scenarios (e.g., medical image domains). The importance of this task has been highly recognized by Reviewer 1CUd "*the task is meaningful*" and Reviewer UbXT "*the task could contribute to the development of academia and industry*". - **Method novelty**: Although using generative models to create training data is not a new idea, our method introduces a distinct design. The key of our innovation lies in the concept of guided imagination (cf. Section 3.1), coupled with two critical expansion criteria (cf. Section 3.2). These insights are underpinned by both empirical observations and theoretical analysis (cf. 
Section 3, Section 5.2, Appendix B, Theorem 4.1). Based on these criteria, our approach can guide generative models to create informative new samples with novel content and correct class labels for expanding datasets. In contrast, while the methods mentioned by the reviewer and Reviewer 7xaH also introduce new training data, they cannot ensure the synthesized data bring sufficient new information and accurate labels for the target small datasets. Moreover, training GANs from scratch, especially with very limited data, often fails to converge or produce meaningful results [A,B], making the mentioned GAN-based methods less effective in small-data scenarios. As such, our method emerges as a more effective way to expand small datasets. The contribution of our method has been recognized by Reviewer 1CUd "*the proposed framework is intuitive, and the criteria are well-motivated*". Please note that **after reading our rebuttal, Reviewer 7xaH also concurs with the novelty of our method.** We hope this clarification can further illustrate the unique contributions of our work. To make it clearer, we will further clarify our main contributions at the end of Section 1, and add one more paragraph in Section 2 to highlight the differences between our work and the related studies mentioned by reviewers. [A] Towards Principled Methods for Training Generative Adversarial Networks. In ICLR, 2017 [B] Training Generative Adversarial Networks with Limited Data. In NeurIPS, 2020 --- >**Q2. The authors have not included the time cost for other models for comparison** Thanks for the constructive suggestions. We're pleased to delve deeper into our comparative analysis on the time cost of data expansion and performance gains between different methods. As shown in the table below, our GIF offers a more favorable trade-off compared to other methods. Specifically, GIF-MAE has a time cost within the same magnitude as data augmentation, but it delivers much better performance gains. 
The slight time overhead introduced by MAE is offset by GPU acceleration, resulting in competitive time costs. This further verifies the superiority of our method. For those prioritizing performance, GIF-SD becomes a more attractive option. Although it involves a longer time due to its iterative diffusion process, it provides more significant performance gains.

| Methods | Expansion speed (per image) | Time (10,000 images) | Accuracy gains over natural image datasets|
| -- |:--:| :--:| :--:|
| Cutout | 0.008s | 76s | +12.8 |
| GridMask | 0.007s | 72s | +14.4 |
| RandAugment | 0.008s | 82s | +20.5 |
| GIF-MAE | 0.008s | 80s | +23.5 |
| GIF-SD | 6.6s | 2h (8 GPUs) | +36.9 |

Upon dissecting the time costs and performance gains, our observations can be summarized as:
- **Time costs**: Diffusion-based expansion (e.g., GIF-SD) > MAE-based expansion (e.g., GIF-MAE, while GANs have similar time cost) ≈ Augmentation-based expansion
- **Performance gains**: Diffusion-based expansion > MAE-based expansion > Augmentation-based expansion

We hope this comparison further clarifies the merit of our approach. Following the constructive comment, we will enrich the cost analysis in Lines 343-350 with this discussion.
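The quoted per-image speeds and wall-clock totals in the table above are internally consistent; a quick back-of-the-envelope check (assuming perfect 8-GPU parallelism for GIF-SD, which the table does not state explicitly):

```python
# Sanity-check the expansion-time figures quoted in the rebuttal table.
per_image_s = {"GIF-MAE": 0.008, "GIF-SD": 6.6}
n_images = 10_000

mae_total_s = per_image_s["GIF-MAE"] * n_images            # ~80 s, matching the table
sd_total_h = per_image_s["GIF-SD"] * n_images / 8 / 3600   # ~2.3 h on 8 GPUs, i.e. "2h (8 GPUs)"
```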
Summary: This paper proposes to expand a small dataset by automatically creating new labeled samples with pre-trained generative models, such as DALL-E2 and Stable Diffusion. The proposed framework, namely Guided Imagination Framework, contains two key parts, i.e., class-maintained information boosting and sample diversity promotion. The experiments of GIF-SD obtain improvements on multiple datasets.

Strengths:
1. The key idea is interesting and the dataset expansion task could contribute to the development of academia and industry.
2. Higher accuracy can be achieved over multiple image datasets.

Weaknesses: The proposed solution to dataset expansion is too easy. The class-maintaining informativeness strategy aims to generate perturbed features with seed sample class consistency. However, how to evaluate the quality and accuracy of generated samples with perturbed features?

Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: Please see the weaknesses
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please see the weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thanks for the effort in reviewing our paper. We are glad that the interesting idea and the important task are appreciated. We address the concerns point by point as follows.

---

>**Q1. The proposed solution to dataset expansion is too easy**

Thanks for the feedback on the simplicity of our solution. We believe that simplicity, when paired with effectiveness, is a strength, not a limitation.

- **Simplicity as a virtue**: Simple solutions are generally more intuitive to understand, so they are usually more adaptable and more likely to be used by the community. Although our method might seem straightforward, achieving such simplicity with effectiveness requires substantial underlying effort and insights, as depicted in Section 3, Section 5.2, Appendix B, Appendix F and Theorem 4.1. These empirical observations and theoretical foundations not only justify our approach but also provide invaluable insights for future research.
- **Significance of automatic dataset expansion**: Our straightforward method holds significant value. It showcases notable advancements in various small-data scenarios, such as in-distribution generalization (Table 1), out-of-distribution robustness (Table 2), and long-tailed problems (Table 22 in Appendix F.7). More importantly, it can drastically reduce the cost and time associated with human data collection/annotation for expanding small datasets (cf. the discussion in Section 5.2, Lines 343-350).

In summary, while our method might appear straightforward at first glance, the depth of thought, experiments, and insights behind it attest to its uniqueness and efficacy. Hence, our method offers an important contribution to addressing the small-data challenges. Its value was also recognized by Reviewer 7xaH "*The proposed methodology is simple and easy to utilize*", Reviewer 1CUd "*the proposed framework is intuitive, and the criteria are well-motivated*", and Reviewer J4xT "*This paper presents a promising solution*".
---

>**Q2. The class-maintaining informativeness strategy aims to generate perturbed features with seed sample class consistency. However, how to evaluate the quality and accuracy of generated samples with perturbed features?**

The goal of our method is to create informative new samples that, when used in conjunction with the original data, improve the model performance in small-data scenarios. Directly assessing the perturbed features might not accurately reflect how beneficial the generated samples are for training a model. Therefore, a more effective way to judge the quality of the generated samples is to directly observe their enhancement of the final model performance. In light of this, this work evaluated various dataset expansion methods by directly measuring the performance of models trained on their expanded datasets.

In addition, we would like to highlight that the class-maintaining property of our method is pivotal. This consistency is reinforced by the objective function outlined in Lines 207-208, ensuring that perturbed features remain aligned with the seed sample class. By guaranteeing this class consistency, our method can bring novel information to model training without concern about class misalignment of the generated samples. The ablation results of GIF-DALLE (cf. Table 17 in Appendix F.5.1) further demonstrate the effectiveness and importance of our class-maintaining strategy. To clarify, we provide the related result below (please refer to Appendix F.5.1 for more detailed analyses of the result).

| Methods | Class-maintained strategy | Diversity promotion strategy | Accuracy of 5x-expanded CIFAR100-S |
| ---------- |:---------------:|:---------------:|:---------------:|
| GIF-DALLE | ✖ | ✖ | 52.1 |
| GIF-DALLE | ✔ | ✖ | 53.1 |
| GIF-DALLE | ✖ | ✔ | 51.8 |
| GIF-DALLE | ✔ | ✔ | 54.5 |

---

We genuinely appreciate the effort the reviewer has dedicated to reviewing our work.
We have diligently addressed the concerns in the rebuttal, emphasizing the novel contributions and the potential value of our work. We respectfully request the reviewer to reconsider our paper with these clarifications in mind. A positive re-evaluation would be immensely beneficial to our efforts. We remain dedicated to addressing any further questions.
Summary: This paper describes a new task called dataset expansion, which aims at expanding the size of small datasets to boost the performance of data-driven AI models on tasks like object classification. The paper proposes a framework called Guided Imagination Framework (GIF) to achieve it by utilizing pre-trained large-scale generative models to synthesize new informative samples according to the images in the dataset. Specifically, the method perturbs the latent feature of the exemplar image in the dataset and designs two criteria, i.e., *class-maintained information boosting* and *sample diversity promotion*, to optimize the noise added to the latent feature of the exemplar image. The paper conducts extensive experiments and verifies the effectiveness of the proposed method in boosting the classification performance on small datasets of natural images and medical images.

Strengths: This paper is well-organized and may be one of the first research works to explore the effectiveness of synthetic images in boosting classification performance. The task is meaningful, the proposed framework is intuitive, and the criteria to maintain the class information and to encourage diversity are well-motivated. The experiments are systematically conducted and show that the proposed framework helps improve performance significantly on various backbones. The ablation study in the appendix shows the effectiveness of the proposed two criteria.

Weaknesses:
1. Though the authors give two reasons for not using CLIP for classifying the target dataset: (1) the transferability (Line#298-300); (2) GIF can benefit various model architectures. However, as one of the state-of-the-art backbones for classification, the proposed method's effectiveness in boosting the performance of CLIP on natural images needs to be justified.
2.
I think the comparison with the few-shot setting in [22] is required to prove the superiority of the method, as the few-shot setting in [22] shares similarities with the paper's task.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. I have some questions about GIF-SD. Does GIF-SD first sample several Gaussian noises $z$ and then perform the inverse diffusion process with the text prompt condition to obtain latent features $f$ which are perturbed by Eq.1 to obtain $f^{'}$ and then fed into the image decoder? In that case, since $f^{'}$ is not the direct perturbation of the feature of the real image sample in the dataset, I think GIF-SD is quite different from GIF-DALLE and GIF-MAE and may fail to guarantee the resemblance to the image samples in the dataset. Or does GIF-SD use a strategy similar to the real guidance (RG) strategy in [22]?
2. The compared methods include DALL-E2, SD, and MAE, but how exactly these methods are used to generate samples is not clear. Have the authors tried their best to optimize the way these models generate samples, e.g., as is done in [22] using language enhancement (LE) and CLIP Filter (CF)? This is important since it can faithfully reflect the proposed method's superiority.
3. I suggest the authors report the domain gap between the real image samples and the synthetic image samples, e.g., using FID or LPIPS. And the analysis of how the domain gap affects the performance is favored.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors are encouraged to discuss the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thanks for the insightful feedback, particularly for recognizing the significance of the studied task and our proposed method. We next address the concerns as follows.

---

>**Q1. Effectiveness in boosting CLIP fine-tuning**

Thanks for pointing this out. In fact, we have evaluated the effectiveness of our method in boosting CLIP fine-tuning. As shown in Table 15 (cf. Appendix F.4.3), our method significantly enhances the fine-tuning performance of CLIP VIT-B/32 on CIFAR100-S, elevating the accuracy from 75.2% to 79.4%. During the rebuttal, we further find that dataset expansion is also beneficial to the out-of-distribution robustness of the fine-tuned CLIP model, as shown in the following table.

| CIFAR100-S dataset | Accuracy on CIFAR100-C |
| ---------- |:---------------:|
| Training from scratch on original dataset | 23.6 |
| CLIP fine-tuning on original dataset | 55.4 (+31.8) |
| CLIP fine-tuning on 5x-expanded dataset by GIF-SD | 61.4 (+37.8) |

These results further underscore the potency of our method.

---

>**Q2. Does GIF-SD use a strategy similar to the real guidance (RG) strategy in [22]?**

Yes, GIF-SD is implemented with a strategy similar to the real guidance (RG) strategy in [22], which has been detailed in Appendix D.2, including the pseudo-code and other implementation details. To offer further clarity, we summarize the overall pipeline here. GIF-SD first conducts text-guided latent diffusion based on the latent feature of the seed data as the starting point (instead of random noise). After text-guided diffusion, GIF-SD conducts guided imagination based on Eqs. (1-2) and the image decoder. This ensures that the generated samples are class-maintained (cf. visualization results in Appendix G) and bring sufficient new information for boosting model performance (cf. Table 1 in Section 5.1).

---

>**Q3.
Comparison with the few-shot setting in [22]?**

Following the constructive suggestion, we adopt the advanced few-shot strategies in [22] (i.e., language enhancement (LE) and CLIP Filter (CF)) to expand the CIFAR100-S dataset based on Stable Diffusion (SD) and real guidance (RG). As shown in the following table, SD combined with these strategies [22] is still noticeably inferior to our GIF-SD for both training from scratch and CLIP tuning. This further demonstrates the superiority of our method.

| CIFAR100-S | Training from scratch | CLIP fine-tuning |
| ---------- |:---------------:| :---------------:|
| Original dataset | 35.0 | 75.2 |
| 5x-expanded dataset by SD+LE+CF [22] | 55.1 (+20.1) | 77.0 (+1.8) |
| 5x-expanded dataset by GIF-SD (ours) | 61.1 (+26.1) | 79.4 (+4.2) |

---

>**Q4. Implementation of DALL-E2, SD and MAE baselines**

These baselines were set up identically to their GIF counterparts for ablated evaluation, with the only exception being the absence of the guided imagination optimization (cf. Section 4). To ensure a consistent comparison, we did not leverage other strategies, like LE and CF [22], for either the baselines or our methods. We agree with the reviewer that integrating these strategies might potentially enhance the quality of data generation. Nevertheless, as shown in the table responding to **the above Q3**, SD using these strategies still performs worse than our GIF-SD, which further verifies our superiority.

---

>**Q5. Analyzing the relations between the domain gap and model performance**

Thanks for the insightful suggestion. In response, we further compute the Fréchet Inception Distance (FID) between the synthetic data generated by different methods and the original data of CIFAR100-S.
The results are summarized in the table below:

| Datasets | FID | Accuracy |
| ---------- |:---------:| :--------:|
| Original CIFAR100-S dataset | - | 35.0 |
| 5x-expanded dataset by RandAugment | 24.3 | 46.7 |
| 5x-expanded dataset by Cutout | 104.7 | 44.3 |
| 5x-expanded dataset by GridMask | 104.8 | 48.2 |
| 5x-expanded dataset by GIF-MAE | 72.3 | 52.7 |
| 5x-expanded dataset by GIF-DALLE | 39.5 | 54.5 |
| 5x-expanded dataset by GIF-SD | 81.7 | 61.1 |

Interestingly, while one might initially assume that a lower FID implies better quality for the expanded data, the actual performance does not consistently follow this notion. For instance, even though GIF-SD has a worse FID than GIF-DALLE, it achieves better performance. Likewise, despite having nearly identical FIDs, Cutout and GridMask lead to different performance. These results suggest that the effectiveness of dataset expansion methods depends on how much additional information and class consistency the generated data can provide to the original dataset, rather than the distribution similarity between those samples and the original data.

In summary, we are grateful for the thought-provoking question. We believe this discussion will spark further research into the relationship between expansion effectiveness and data fidelity (as measured by metrics like FID), potentially guiding the development of even more effective dataset expansion techniques in the future.

---

>**Q6. The authors are encouraged to discuss the limitations of the paper**

Thanks for the valuable suggestion. We will enrich our supplementary with a limitation discussion. Due to the word constraints of this rebuttal, we kindly direct the reviewer to our detailed response to [Reviewer 6GPi's Q7](https://openreview.net/forum?id=82HeVCqsfh&noteId=CerYJBcKIX), where we delve into this discussion.

---

Thanks again for the thoughtful feedback.
We have diligently addressed each concern in our rebuttal, incorporating fresh experimental results and deeper discussions. The reviewer's insights have been pivotal in enriching our work, which aligns more closely with the conference's standards. We hope our updates address the concerns and might encourage a positive reconsideration of the initial assessment within the review panel.

---

Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: After carefully reading the rebuttal, I think the authors have properly addressed my concerns. Consequently, I decided to raise my score accordingly.

---

Reply to Comment 1.1.1:
Comment: We deeply appreciate the thoughtful re-evaluation and recognition of our work's merit. The revised score and strong support boost our confidence and play a pivotal role in the review discussions. We believe that a champion like the reviewer, with a clear understanding of our research's strengths, can encourage the wider review panel to fully recognize and appreciate our contributions.
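For readers who want to reproduce the kind of domain-gap comparison discussed in Q5 above, the Fréchet distance underlying FID is the distance between two Gaussians fitted to feature sets. The sketch below is a hedged illustration only: it is not the authors' evaluation code, and random Gaussian features stand in for the real Inception-v3 activations that FID actually uses.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_*: (n_samples, dim) arrays of image features.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # sqrtm of a product of covariance matrices can pick up tiny
    # imaginary parts from numerical error; keep the real part.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in "real" features
fake = rng.normal(0.5, 1.0, size=(500, 8))   # stand-in "synthetic" features

print(frechet_distance(real, real))  # close to 0 for identical sets
print(frechet_distance(real, fake))  # grows with the mean shift
```

In practice one would extract Inception features for both image sets and pass them to a tested implementation (e.g., `torchmetrics`' FID), but the formula above is what those implementations compute.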
Rebuttal 1:
Rebuttal: **General Response**

---

We deeply appreciate all the reviewers for dedicating time and effort to reviewing our paper. Here, we first address the general questions below, and subsequently, we will provide detailed responses to each reviewer's specific questions and comments.

---

>**G1. Concern about the time and cost of dataset expansion (Reviewers 7xaH and J4xT)**

The primary goal of dataset expansion is to mitigate the time and cost of human data collection/annotation and boost model performance in small-data scenarios. Please note that improving model performance on small-scale datasets inevitably incurs certain costs. For instance, in the context of transfer learning, the total cost—including time and resources for collecting and pre-training on large-scale datasets (such as large-scale medical image datasets)—could be significantly high. In this work, we introduce a complementary task paradigm that mitigates the human labor and financial costs associated with data collection, while also bolstering model performance. In this context, dataset expansion and model training are treated as two distinct phases, and we differentiate the analysis for each phase as detailed below.

**I. Dataset expansion**: As discussed in Section 5.2 (Lines 343-350), GIF-SD can expand an image by 5 times in just 33 seconds using a V100 GPU, so it can significantly save the time and cost of dataset expansion compared to manual data collection/annotation. Specifically, manually annotating 10,000 images, according to Masterpiece Group (https://mpg-myanmar.com/annotation), would typically **take 2 weeks and cost around \$800**. In contrast, GIF-SD can generate the same volume of labeled data in a **mere 2 hours, costing roughly \$40 for renting 8 V100 GPUs**. Moreover, if higher efficiency is pursued with an acceptable performance drop, GIF-MAE can create 10,000 labeled data in **just 80 seconds, at a cost of about \$0.48 for renting 8 V100 GPUs**.
Note that once the dataset is expanded, it can be directly utilized to train various models, removing the need for regeneration with each model and thereby further enhancing efficiency.

| Methods | Expansion speed | Time (10,000 images) | Costs (10,000 images) |
| ---------- |:---------------:| :---------------:| :---------------:|
| Human data collection | 120.96s per image | 2 weeks | $800 |
| GIF-MAE (ours) | 0.008s per image | 80 seconds | $0.48 |
| GIF-SD (ours) | 6.6s per image | 2 hours | $40 |

**II. Model training**: The training time varies based on the specific datasets. However, it is pivotal to note that all dataset expansion methods were compared based on the same expansion ratio, thus ensuring consistent training time/cost and fair comparisons. We acknowledge that training on an expanded dataset will inevitably take longer than training on the original dataset. However, as shown in Table 1 (cf. Section 5.1), the significant improvement in model performance (i.e., by 36.9% on average over six natural image datasets and by 13.5% on average over three medical datasets) makes the increased investment in training time worthwhile. More importantly, a detailed analysis presented in Appendix F.1.3 (cf. Table 10) shows that, even with the same training consumption (in terms of sample number × training epochs), our proposed method still proves advantageous. More specifically, as shown in the following table, training the model on the original CIFAR100-S dataset for 5x more epochs performs much worse than the model trained on our 5x-expanded dataset. This comparison further underscores the effectiveness of our method in achieving higher accuracy without inflating training costs.
| CIFAR100-S | Training epoch | Consumption (data number x epoch) | Accuracy |
| ---------- |:---------------:| :---------------: | :---------------:|
| Training on original dataset | 100 | 1 million | 35.0 |
| Training on original dataset with RandAugment | 100 | 1 million | 39.6 |
| Training on original dataset with RandAugment | 600 | 6 million | 51.1 |
| Training on 5x-expanded dataset by GIF-SD (ours) | 100 | 6 million | 61.1 |

To summarize, based on the comprehensive analysis, we firmly believe that the efficiency and cost-effectiveness of our method outweigh the concerns related to computational consumption.

---

>**G2. Question on potential influences of the synthetic data (Reviewer J4xT)**

Ethical considerations, especially in AI research and data generation, are indeed paramount. Given the importance of this topic, we have opted to address it in the general response, even though it was specifically raised by Reviewer J4xT. In fact, our approach is constructed with care to avoid negative ethical implications, as evidenced in the following points:

- **Controlled generation**: In our approach, the generation of synthetic data is driven by our expansion guidances, which ensure that new data is derived directly and meaningfully from the original dataset. This controlled mechanism minimizes the risks of creating unrelated or potentially harmful images.
- **No personal or sensitive data**: It is also worth noting that our method primarily focuses on publicly available datasets like CIFAR, Stanford Cars, and similar, which do not contain personal or sensitive information. As such, the risks related to privacy breaches or misrepresentations are substantially diminished.
- **Ethical commitment**: While our research aims to showcase the technical capabilities of our method, we agree with the significance of upholding ethical standards in technology.
In future applications, if applying our method to more sensitive domains, we commit to seeking ethical evaluations to ensure responsible use. We hope these clarifications underscore our commitment to responsible AI and address the concern.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper explores the new dataset expansion task, which addresses the data scarcity of small datasets while minimizing costs. They leveraged generative models to create an automatic data generation pipeline.

Strengths:
- Dataset expansion is an interesting and useful research topic for the small-scale domain.
- The proposed methodology is simple and easy to utilize.
- Extensive experiments demonstrate the proposed method helps performance improvement in small benchmarks as well as model generalization.

Weaknesses:
- My major concern is the novelty of the task. I agree that new sample generation can help the classifier to be more robust. However, this task (dataset expansion) is not a novel task to me, and I don't think the experimental results are surprising or interesting. Recent works [A,B,C,D,E] for various computer vision tasks have explored generative modeling to solve the limited labeled data problem.
  - [A] W. Wu et al., "DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models"
  - [B] S. Azizi et al., "Synthetic Data from Diffusion Models Improves ImageNet Classification"
  - [C] G. Gu et al., "CompoDiff: Versatile Composed Image Retrieval With Latent Diffusion"
  - [D] Q. Kong et al., "Active Generative Adversarial Network for Image Classification"
  - [E] V. Sandfort et al., "Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks"
- Although the authors explain expansion efficiency in Fig.4 and Sec.5.1, the generation time of the diffusion model and the conversion speed using GIF in training may be slow. So, I wonder about the training efficiency in terms of cost and time.

Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors:
- I wonder about the impact/synergy of the proposed method for pretrained ImageNet models (like ViT), compared to the results in Table 1. Previous methods for data augmentation also explain their superiority under the finetuning protocol.
- I guess the linear-probing or finetuning of CLIP will show comparable results for Table 2.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thanks a lot for the comments, particularly for recognizing that the task is useful and the proposed method is simple and easy to use. We address the concerns as follows.

---

>**Q1. Concern about the novelty of our work**

Although expanding datasets is not a completely new concept, our work still presents valuable contributions to the community in the following aspects:

- **Task importance**: Automatic dataset expansion, especially for small-data scenarios, bears great significance. While the concept might not be brand-new, our work provides a unique perspective on formally defining and tackling this task. The proposed dataset expansion can substantially reduce the time and costs involved in human data collection/annotation (cf. Section 5.2, Lines 343-350), and significantly enhances the model performance in small-data scenarios (cf. Table 1 in Section 5.1). The importance of this task has been highly recognized by other reviewers, such as Reviewer 1CUd "*the task is meaningful*", Reviewer UbXT "*the dataset expansion task could contribute to the development of academia and industry*" and Reviewer 6GPi "*the work has the potential to attract wide attention*".
- **Method novelty**: Although using generative models to create training data is not a new idea, our method introduces a distinct design. The crux of our innovation lies in the concept of guided imagination (cf. Section 3.1), coupled with two critical expansion criteria (cf. Section 3.2). These insights are underpinned by both empirical observations and theoretical analysis (cf. Section 3, Section 5.2, Appendix B, Theorem 4.1). Based on these criteria, our approach can guide generative models to create informative new samples with novel content and correct class labels for expanding datasets. While the mentioned methods [A-E] also introduce new training data, they cannot ensure the synthesized data bring sufficient new information and accurate labels for the target small datasets.
Moreover, training GANs from scratch, especially with very limited data, often fails to converge or produce meaningful results, making methods like [D,E] less effective in small-data scenarios. As such, our method emerges as a more effective way to expand small datasets. The contribution of our method has been highly recognized by Reviewer 1CUd "*the proposed framework is intuitive, and the criteria are well-motivated*", and Reviewer J4xT "*This paper presents a promising solution*". We hope that this clarification can illustrate the unique contributions of our work and address the concern adequately.

[A] DiffuMask. arXiv, 2023-03-21
[B] Synthetic Data from Diffusion Models Improves ImageNet Classification. arXiv, 2023-04-17
[C] CompoDiff. arXiv, 2023-03-21
[D] Active Generative Adversarial Network for Image Classification. In AAAI, 2019
[E] Data augmentation using CycleGAN. Scientific Reports, 2019

---

>**Q2. Question about the cost and time**

Thanks for the comment. Please refer to [General Response G1](https://openreview.net/forum?id=82HeVCqsfh&noteId=e7B6qvrwYr) for the detailed response.

---

>**Q3. The effectiveness in model fine-tuning**

Thanks for mentioning the potential synergy between dataset expansion and model fine-tuning. They indeed complement each other. As discussed in Appendix F.4.3 (cf. Table 15), our dataset expansion also improves the fine-tuning performance of pre-trained CLIP VIT-B/32 on CIFAR100-S. For clarity, we provide the related results below.

| Methods | CIFAR100-S |
| ---------- |:---------------:|
| Training from scratch | 35.0 |
| Zero-shot CLIP VIT-B/32 | 41.6 (+6.6) |
| Fine-tuning CLIP VIT-B/32 on original dataset | 75.2 (+40.2) |
| Fine-tuning CLIP VIT-B/32 on 5x-expanded dataset by GIF-SD | 79.4 (+44.4) |

During the rebuttal, we further find that our dataset expansion also boosts the fine-tuning performance of models like ImageNet pre-trained ResNet-50 on Cars, as evident in the following table.
| Methods | Cars |
| ---------- |:---------------:|
| Fine-tuning ImageNet pre-trained ResNet-50 on original dataset | 87.6 |
| Fine-tuning ImageNet pre-trained ResNet-50 on 5x-expanded dataset by GIF-SD | 88.9 |

In summary, our dataset expansion works effectively with model fine-tuning across various datasets and architectures, which further verifies our effectiveness.

---

>**Q4. I guess the finetuning of CLIP will show comparable results for Table 2**

Table 2 aims to fairly compare dataset expansion methods in the out-of-distribution (OOD) setting, so we train all models from scratch. We agree with the reviewer that fine-tuning a pre-trained CLIP, benefiting from its extensive pre-training on large datasets, would undoubtedly yield superior OOD performance. However, as discussed in the response to **the above Q3**, dataset expansion and model transfer are distinct yet complementary paradigms. They are not in competition, but rather can be synergistically combined. Therefore, our method can also enhance the OOD performance of fine-tuned CLIP models on CIFAR100-C, as demonstrated in the following table. Thanks again for the insightful feedback, and we will incorporate the new result into the revised manuscript.

| CIFAR100-S dataset | OOD Accuracy on CIFAR100-C |
| ---------- |:---------------:|
| Training from scratch on original dataset | 23.6 |
| Training from scratch on 5x-expanded dataset by GIF-SD | 43.3 (+19.7) |
| CLIP fine-tuning on original dataset | 55.4 (+31.8) |
| CLIP fine-tuning on 5x-expanded dataset by GIF-SD | 61.4 (+37.8) |

---

We deeply value the reviewer's dedication to assessing our paper. Based on the constructive feedback, we have endeavored to address every concern and have incorporated new results and discussions to that end. We kindly ask the reviewer to re-evaluate our paper in light of these enhancements, which enable our work to better align with the conference's standards. Should additional concerns arise, we are glad to resolve them.
---

Rebuttal Comment 1.1:
Title: Thanks to the authors for the rebuttal.
Comment: Thanks to the authors for responding to my questions.

- I do not refute the importance of this task, but I want to emphasize that this task is not a novel task presented for the first time in this paper, and should be presented through sufficient comparison with the missing related work.
- I agree with the novelty of the proposed method for guided imagination. I have additional questions for additional experiments with pre-trained models.
- How does it compare to using other augmentation methods on pre-trained networks? The performance gap between training with original datasets and with GIF-SD seems to be very marginal compared to the scratch model.
- In practice, these days, most small-data, zero-shot, or few-shot settings adopt a method of recycling pre-trained model parameters, and I think it would be good to add experimental results on this part to the main text.

---

Reply to Comment 1.1.1:
Comment:

> **Q1: I do not refute the importance of this task, but I want to emphasize this task is not a novel task presented for the first time, and should be presented through sufficient comparison with missing related work. I agree the novelty of the proposed method.**

We sincerely appreciate the follow-up comments, particularly in recognizing the importance of our task and the novelty of our method. While the concept of dataset expansion may not be entirely new, we believe the contributions the reviewer has recognized hold substantial value for the community. We agree with the reviewer that it is beneficial to provide a discussion of the missing related work [A-E]. Following the suggestion, we will incorporate the discussion on [A-E] (cf. the original rebuttal) into the "Related Work":

- Moreover, recent methods [A-E] also explored generative models to generate new data for model training.
However, these methods cannot ensure that the synthesized data bring sufficient new information and accurate labels for the target small datasets. Moreover, training GANs from scratch, especially with very limited data, often fails to converge or produce meaningful results, making methods like [D,E] less effective in small-data scenarios. As such, our method emerges as a more effective way to expand small datasets. We believe this revision can further ensure a thorough contrast of our work with prior studies. --- > **Q2: Comparisons to augmentation methods in model fine-tuning. The performance gap seems to be marginal compared to the scratch model.** We thank the reviewer for the comment. Here, we first provide a detailed explanation for the modest performance gains in model fine-tuning. - As highlighted in our original rebuttal, both dataset expansion and pre-trained model fine-tuning address the small-data problem. That is, model pre-training on large-scale datasets can mitigate the challenges associated with limited data training, which naturally diminishes the headroom for performance improvement via dataset expansion. As a result, the performance gains realized by integrating dataset expansion with model fine-tuning are understandably less significant than those achieved when training from scratch. Following the suggestion, we provide additional results related to model fine-tuning. - **Superiority of our method over augmentation in model fine-tuning**: While the gains in model fine-tuning can be modest due to the aforementioned reasons, our method still delivers noticeable performance improvement. The table below shows that when expanding CIFAR100-S for CLIP model fine-tuning, our GIF-SD yields a significant advantage over both RandAugment and an advanced training data generation method [A] suggested by Reviewer **1CUd**. These results further verify the superiority of our approach.
| CIFAR100-S | CLIP fine-tuning |
| -- |:--:|
| Original dataset | 75.2 |
| 5x-expanded by RandAugment | 77.7 (+2.5) |
| 5x-expanded by SD+LE+CF [A] | 77.0 (+1.8) |
| 5x-expanded by GIF-SD (ours) | 79.4 (+4.2) |

- **Broader applicability of dataset expansion compared to model fine-tuning**: A salient advantage of our dataset expansion is its adaptability to different image domains, whereas model fine-tuning is largely constrained by the correlation between the pre-training and fine-tuning datasets. When there is a significant disparity in image nature (e.g., from natural images to medical images), the effectiveness of fine-tuning diminishes. This is evident in the following table, where fine-tuning a CLIP pre-trained model on medical image datasets only shows modest improvements, lagging behind the gains from our dataset expansion. Such a limitation in model fine-tuning was also observed in prior works like [B]. This finding further pinpoints the importance of dataset expansion, especially in scenarios where suitable in-domain pre-trained models are not readily available.

| Methods | PathMNIST | BreastMNIST | OrganSMNIST |
| -- |:--:|:--:|:--:|
| Training from scratch on original dataset | 72.4 | 55.8 | 76.3 |
| CLIP fine-tuning on original dataset | 78.4 (+6.0) | 67.2 (+11.4) | 78.9 (+2.6) |
| Training from scratch on 5x-expanded dataset by GIF-SD | 86.9 (+14.5) | 77.4 (+21.6) | 80.7 (+4.4) |

In sum, these new results further verify the superiority and effectiveness of our method. In light of this constructive suggestion, we will incorporate this discussion into Section 5.1.

**Reference**:
- [A] Is synthetic data from generative models ready for image recognition? In ICLR, 2023.
- [B] Transfusion: Understanding transfer learning for medical imaging. In NeurIPS, 2019.

---

We are deeply grateful for the reviewer's sustained engagement. Thanks to the reviewer's constructive suggestions, the overall quality of our paper has been further improved.
In light of these improvements, we humbly request the reviewer to re-evaluate our paper. Should there be any further questions, we are glad to address them.
One-step differentiation of iterative algorithms
Accept (spotlight)
Summary: In bilevel optimization, or optimization problems with equilibrium constraints, computing a derivative of the upper-level problem is a well-known stumbling block as this requires "differentiating through" the solution to the lower level problem. This paper provides a theoretical study for one approach to overcoming this issue, __one-step differentiation__, also known as Jacobian-free Backpropagation. Although one-step differentiation is deceptively simple, the core contribution of this paper is that it is safe to use in many situations. Theoretical results are bolstered by some numerical results. In addition there are several nice examples, remarks, and corollaries that could be of use to practitioners (for example, how to improve the quality of the one-step derivative by using a k-step approach instead). Strengths: - In my opinion, this paper's biggest strength is that it has a simple message ("One-step differentiation works in many situations") and conveys this message in a way that is clear and accessible to non-experts. - There are several little gems of wisdom sprinkled throughout this paper, e.g. applying one-step differentiation to $F^K$ instead of $F$ (top of pg. 4), using different operators for the forward and backward pass for implicit differentiation (Remark 2). - I like the bound on hypergradient approximation (Corollary 4). - The clear comparisons between the complexities of various gradient estimation approaches contained in Table 2 are great. Weaknesses: 1. In addition to Automatic Differentiation (AD) and Implicit Differentiation (ID), there is a third benchmark approach the authors should consider, namely _Inexact Automatic Differentiation_ (IAD) (see for example [1,2]). In particular, [2] makes a surprising connection between IAD and ID, showing they are essentially the same. I'd like to see some discussion of the IAD approach in this paper, and perhaps even an inclusion of this approach into the numerical experiments. 2.
See "Questions" section below for some questions on the relationship between your work and that of [3]. *Minor Issues* 3. the phrase "piggyback recursion" is used for the first time in line 207. I suggest mentioning this terminology immediately after equation (2) and providing a reference. 4. In addition to [34] (Vlastelica et al), I'd suggest citing [4], which predates [34] and also considers one-step differentiation for polytope-constrained lower-level problems. 5. The notation in Lemma 3 is confusing. I'd strongly suggest using $\theta_i$ instead of $x_k$ and $g$ instead of $f$, as the Lemma is intended to be applied to the upper level problem. 6. I have several comments regarding Fig. 3: - The time displayed is to calculate a single gradient, right? If yes, this should be made clearer in the caption. - The x-axis needs to be changed. I'd suggest $5\times 10^{5}$ instead of 50000 and so on. - What does the shading represent? - See comment 1 about Inexact Automatic Differentiation. 7. Could you say more about how you randomly generate QP instances (pg 8--9)? In particular, how do you ensure feasibility? *Typos etc.* 8. "well-foundness" in line 10 should be "well-foundedness". 9. "a important" in line 12 should be "an important". 10. "relies" on line 20 should be "rely". 11. "distance with" on line 33 should be "distance to". 12. "quantiative" on line 58 should be "quantitative" 13. "parameteric" on line 72 should be "parametric" 14. "Superlinar" on line 144 should be "Superlinear" 15.
"guaranties" on line 199 should be "guarantees" [1] _Automatic Differentiation of Some First-Order Methods in Parametric Optimization_ by Mehmood and Ochs (2019) [2] _Analyzing Inexact Hypergradients for Bilevel Learning_ by Ehrhardt and Roberts (2023) [3] _JFB: Jacobian-Free Backpropagation for Implicit Networks_ by Wu Fung et al (2022) [4] _Learn to Predict Equilibria via Fixed Point Networks_ by Heaton et al (2021) Technical Quality: 3 good Clarity: 3 good Questions for Authors: In [3] implicit networks with a head and tail ($S_\Theta$ and $Q_\Theta$ in their notation) are considered. This complicates the analysis of the hypergradient, see the proof of Theorem 3.1, particularly when analyzing the inner product between the true hypergradient and the approximation $p_{\Theta}$. Your analysis of hypergradients (i.e. Corollary 4) works out nicely, partly because you consider the distance between true and approx. hypergradient, not the inner product. Can you apply this to get quantitative bounds for the setting considered in [3]? [3] _JFB: Jacobian-Free Backpropagation for Implicit Networks_ by Wu Fung et al (2022) Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly thank the referee for his detailed comments and evaluation of our manuscript. 1. We will mention Inexact AD and discuss the references proposed by the referee in the related work section. In our situation, it is not really clear how to apply inexact AD. Indeed, inexact AD is often applied to first-order methods, where the evaluation of the Jacobian of a single iteration is cheap, and many iterations are performed. In the context of Figure 3, we consider super-linearly convergent algorithms, typically variations of Newton's method. In this setting, evaluation of a single iteration is costly (typically similar to implicit differentiation) with very few iterations performed. One-step differentiation is "exact" in the sense that it finds the same derivative as implicit differentiation up to numerical errors (in our experiments), while inexact AD will necessarily have some time overhead as it requires more operations compared to one-step differentiation (form the full Jacobian and perform fixed point iterations). We, therefore, believe that the settings where one-step differentiation performs well are not favorable settings for inexact AD (and vice versa). We see the two approaches as complementary, which we will discuss in a revised version of the paper. For this reason, we will not include timing experiments for inexact AD as the comparison would not be really fair (for the same reason, we did not include forward AD, or unrolling, in our experiments). 3. Indeed, we will mention the term piggyback right after (2). 4. We will include the cited references in the related work section. 5. The referee is right, Lemma 3 will be modified, and we will add a corollary to specify it for bilevel problems. 6. We will take the remarks of the referee into account regarding Figure 3. In particular, the shaded area is the standard deviation over 10 repetitions of the experiment. 7.
To generate the QP instances, the quantities $n, m, p$ are varied on a logarithmic scale. We set $Q = M^TM$ where $M$ is of size $n \times n$ with entries uniform in [-1,1]. Constraint matrices $A$ and $G$ also have entries uniform in [-1,1], and we chose $m$ and $p$ to be smaller than $n$ so that feasibility is generic and occurs with probability one. This will be discussed in a dedicated appendix. We will correct all the typos pointed out by the referee. Question: Thanks for this nice question. The ideas developed in [3] are indeed similar, but the theoretical results are quite different. The main assumption in [3] is (3.3), which is pretty different from ours: - It does not put any constraint on the magnitude of the derivative with respect to parameters (theta). - It requires good conditioning: Jacobian matrices (with respect to parameters theta) should be close to identity, and the level of closeness should be balanced with respect to the contraction factor ($\gamma$ in [3]). The results are, therefore, quite different in nature (descent direction versus quantitative estimation). We will not be able to obtain quantitative estimation under the exact same setting as [3] (since assumptions are different), but we will be able to obtain quantitative estimates under suitable assumptions. These will be exactly the same as in [3] and will be essentially the same as what we developed in Corollary 4. We will add a remark regarding this after Corollary 4 and in comparison with the existing work section. --- Rebuttal Comment 1.1: Comment: Great, thanks for addressing all my questions! I have no further comments and hope to see this paper accepted.
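To make the QP-generation recipe described in the rebuttal above concrete, here is a minimal numpy sketch (our own illustration, not the authors' code: the function name `random_qp`, the right-hand sides built from a random reference point `x0`, and the linear term `q` are assumptions we add, since the rebuttal only specifies $Q = M^TM$, the constraint matrices, and the size relations $m, p < n$):

```python
import numpy as np

def random_qp(n, m, p, rng=None):
    """Sketch of a random QP instance: min 0.5 x^T Q x + q^T x
    s.t. A x = b, G x <= h. Following the rebuttal, Q = M^T M with
    M entries uniform in [-1, 1] (so Q is PSD), and A, G have the
    same entry distribution with m, p < n. Building b and h from a
    random reference point x0 is our own assumption; it guarantees
    feasibility by construction."""
    rng = np.random.default_rng(rng)
    M = rng.uniform(-1.0, 1.0, size=(n, n))
    Q = M.T @ M                               # positive semidefinite
    A = rng.uniform(-1.0, 1.0, size=(m, n))   # equality constraints
    G = rng.uniform(-1.0, 1.0, size=(p, n))   # inequality constraints
    x0 = rng.uniform(-1.0, 1.0, size=n)       # reference point
    b = A @ x0                                # x0 satisfies A x = b
    h = G @ x0 + 1.0                          # slack keeps x0 strictly feasible
    q = rng.uniform(-1.0, 1.0, size=n)
    return Q, q, A, b, G, h

Q, q, A, b, G, h = random_qp(n=20, m=5, p=5, rng=0)
```

With $m, p < n$, the equality system is underdetermined, which is why feasibility is generic, as the rebuttal notes.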
Summary: In this paper, the authors consider the problem of differentiating the fixed-point of an algorithm with recursive updates, with respect to some parameter of the algorithm. They propose a one-step automatic differentiation technique where, once having approximated the fixed-point through the recursive algorithm, they back-propagate only through the last step. They provide a theoretical and numerical comparison of their one-step technique with the existing Automatic and Implicit Differentiation techniques. They show that the technique converges for superlinear algorithms, and incurs an error for linear algorithms which depends on the rate of convergence. For linear algorithms, they propose to use a K-step technique where K depends on the condition number. They compare the time complexity of the three strategies on Newton's method applied to weighted logistic regression and the interior point method applied to a constrained quadratic programming problem. They also show the K-step technique on gradient descent applied to weighted ridge regression. Strengths: -> For superlinearly convergent algorithms and for fast linear algorithms, the authors demonstrate * Easy implementation of Automatic Differentiation, and * Time and Memory complexity of Implicit differentiation. -> The derivative error of their one-step technique is shown to depend on the error in the estimation of the solution and the convergence rate (which is zero for superlinear algorithms). -> Except for the third example in Section 4 and in particular Figure 4, the paper is well-written and structured and very easy to follow. The proofs are quite easy to understand. Weaknesses: -> For large scale applications (n >> 1), memory and time complexities of one update step of super-linear algorithms do not scale well with n: - Time: O(n^2) for quasi-Newton methods, O(n^3) for Newton methods. - Memory: O(n^2) for both.
-> In such cases, we resort to first-order methods where the convergence rate is non-zero. For ill-conditioned problems, the authors suggest K-step back-propagation with K = 1/\kappa, but then that is just truncated back-propagation and we still have a memory overhead. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: -> In implementations of Algorithms 1, 2 and 3 for Bilevel Optimization, did the authors first construct the Jacobians and then evaluate J^T \grad g? Because that is quite inefficient. For bilevel Optimization, implementing VJP for each algorithm is the suitable way. The authors should mention that. -> In Table 2, I think the authors should clarify that the time complexity of Piggyback recursion (2) is not the same as that of the forward mode AD recursion (1). I believe that is kn\omega C_F. Similarly, the memory complexity of forward mode AD is n. -> Figure 4 lacks explanation. I understand that the weighted ridge regression example in Section 4 is aimed to depict the behaviour of the three techniques on gradient descent applied to an ill-conditioned problem. But I don't really understand what is shown in Figure 4. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for his thoughtful comments. Indeed, superlinearly convergent algorithms / second-order-like methods are fast, but the price to pay is that each step is very expensive (compared to first-order methods, for example). There is no free lunch, and in that case, our analysis only makes sense practically for relatively small-scale scenarios or very well-conditioned problems, but this is a problem-dependent feature, and one does not really have control of this. Regarding the memory overhead of K-step differentiation, there is indeed an overhead, but it could be relatively small if K remains not too large (e.g., well-conditioned problems). Questions: - We did form Jacobians of the iterative process (F) and solved linear systems using a dedicated solver (in Jax). The inversion was not done based on VJP (using fixed point iterations or conjugate gradient) because, for the relatively small scale of our problems, linear algebra was more efficient. - We did not form the full implicit Jacobian in equation (3) using full matrix inversion; we only solved a linear system involving a left incoming vector in equation (3). In this sense, we did implement VJP for implicit differentiation. - We will make this precise in a dedicated appendix in the revised version. Point 2: The reviewer is right; we will add a line with an evaluation of forward AD, which should be slightly more favorable than the piggyback recursion as Jacobian matrices are not explicitly formed and multiplied. Point 3: We agree and will revise the discussion in the weighted ridge regression section and better explain the content of Figure 4 with more detailed comments.
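As a concrete illustration of the VJP point discussed in this exchange, here is our own numpy sketch (not the authors' Jax code; the toy affine fixed-point map and the sizes are illustrative assumptions). For a fixed point $x^* = F(x^*, \theta)$, implicit differentiation gives $dx^*/d\theta = (I - \partial_x F)^{-1}\,\partial_\theta F$, and the vector-Jacobian product $v^\top dx^*/d\theta$ can be obtained by solving one transposed linear system with the left incoming vector, instead of forming the full implicit Jacobian:

```python
import numpy as np

# Toy affine fixed-point map F(x, theta) = B x + C theta with ||B|| < 1,
# so x*(theta) = (I - B)^{-1} C theta and dx*/dtheta = (I - B)^{-1} C.
rng = np.random.default_rng(0)
n, d = 5, 3
B = 0.1 * rng.standard_normal((n, n))   # small entries: contraction
C = rng.standard_normal((n, d))
v = rng.standard_normal(n)              # left incoming (cotangent) vector

I = np.eye(n)
# Naive: form the full implicit Jacobian, then multiply.
J = np.linalg.solve(I - B, C)           # dx*/dtheta, an n x d matrix
vjp_naive = v @ J

# VJP: solve a single transposed system (I - B)^T w = v, then w^T C.
w = np.linalg.solve((I - B).T, v)
vjp = w @ C

assert np.allclose(vjp, vjp_naive)
```

The VJP route solves one $n \times n$ system per incoming vector, which is exactly the "left incoming vector" implementation the rebuttal describes, and avoids ever materializing the $n \times d$ implicit Jacobian.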
Summary: The paper presents a method called one-step differentiation, also known as Jacobian-free backpropagation, as an alternative to automatic and implicit differentiation. The authors analyze the theoretical approximation of the one-step Jacobian and provide specific examples such as Newton's method and gradient descent, on which the authors showed explicit convergence of the one-step case to the ground truth solution. They demonstrate the efficiency of one-step differentiation in bilevel optimization and provide numerical illustrations using logistic regression, an interior point solver for quadratic programming, and weighted ridge regression with gradient descent. On these examples, the authors demonstrate the efficiency of the one-step approach while demonstrating a clear speed advantage over the autodiff approach. Strengths: Automatic differentiation of iterative systems is long known for its slowness, rendering many differentiable systems useless in practice. Of course, one can re-route to implicit differentiation, also known as sensitivity analysis, adjoint method, etc. However, implementing them is not fun, especially for complex systems. The authors proposed a competitive alternative that is both easy to implement, fast to compute (Figure 3), and without any loss of accuracy (line 284). I can see it laying the foundation for many downstream applications. As the authors pointed out, using just the Jacobian of the last step seems like too naive an approximation. However, the authors proved several crucial theorems in the paper showing that it's in fact quite accurate in many cases (of course not all the cases). The result is general in the sense that it applies to all iterative methods that have a fixed point. The paper is well written, with a good balance of text, code, and proofs, making the paper easy to follow. That said, I admit that I did not check every step of the proof carefully. Weaknesses: As a theoretical paper, I really don't see much weakness.
That said, I would be interested in seeing validation and experiments on more complex engineering systems, potentially involving highly non-convex neural networks. Minor: Figure 1 caption, "is explicited". explicit is not a verb. Technical Quality: 3 good Clarity: 3 good Questions for Authors: line 134: can the authors please discuss further when practitioners are advised to use the K last steps? in other words, K-step differentiation of iterative algorithms. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper lacks a discussion of potential limitations or scenarios where the one-step differentiation method may not be suitable or may exhibit suboptimal performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for his positive feedback. Experiments and validation on highly nonconvex neural networks is something that we are currently investigating. From our intuition so far (and as predicted by the theory we developed), one-step differentiation is not a panacea that will perfectly work under all circumstances, but it may have some important benefits for specific architectures under specific settings. We do not have solid enough findings in this direction to include in the present work, but we will for sure invest efforts in this direction. Of course, we will correct the typo and include a discussion regarding situations for which using K-last-steps differentiation is advisable. This will also provide an answer to the reviewer's last question: the typical scenario for which one-step differentiation may fail is for slow algorithms, which corresponds to a contraction factor close to 1 (or even equal to 1; in this case, even implicit differentiation has no guarantee). This is actually the reason why we propose to differentiate K steps instead of a single step in this situation. This will be more properly discussed in the numerical and conclusion sections. --- Rebuttal Comment 1.1: Comment: I appreciate the additional discussions on k steps. Please add the neural network into discussion as well. With that, I look forward to having this paper accepted.
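The discussion above of slow algorithms (contraction factor close to 1) and K-step differentiation can be illustrated with a toy scalar example (our own construction, not from the paper): for the contraction $F(x, \theta) = (1-\alpha)x + \alpha\theta$ the fixed point is $x^*(\theta) = \theta$, so the true derivative is 1, and differentiating through only the last $K$ steps from a constant start point gives $1 - \rho^K$ with contraction factor $\rho = 1-\alpha$:

```python
# Toy sketch (our own, not from the paper): K-step differentiation of
# the scalar contraction F(x, t) = (1 - a) x + a t, whose fixed point
# is x*(t) = t with true derivative 1 and contraction factor rho = 1 - a.

def k_step_derivative(a, K):
    """Derivative w.r.t. t of K iterations of F applied to a constant
    start point: sum_{j=0}^{K-1} (1-a)^j * a = 1 - (1-a)^K."""
    rho = 1.0 - a
    return 1.0 - rho ** K

a = 0.1                               # slow algorithm: rho = 0.9
one_step = k_step_derivative(a, 1)    # = a; error = rho = 0.9 (large)
k_step = k_step_derivative(a, 50)     # error = rho**50 (tiny)

print(abs(one_step - 1.0))            # ~0.9, exactly the contraction factor
print(abs(k_step - 1.0))              # ~0.005
```

The one-step error equals the contraction factor $\rho$, which is why one-step differentiation degrades for slow algorithms ($\rho \to 1$) and why back-propagating through K steps, with K scaled to the conditioning, recovers accuracy at the cost of some memory.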
Summary: This paper develops a convergence-rate analysis for the approximation of the Jacobian under the technique of one-step differentiation. This technique is similar to that of iterative differentiation (aka unrolling, differentiating through optimization), except only the last iterate of the algorithm is used to compute the Jacobian. The authors develop a quite generic framework that contains many common algorithms as special cases. The paper also contains several examples of how to apply this framework to specific examples such as gradient descent and Newton's method. Strengths: 1. The paper is clearly written, notation is largely standard. Main results are clearly explained. Overall, this paper was a pleasure to read. 2. As far as I can tell, the results in this paper are sound. 3. Experiments are to the point and clearly illustrate the theoretical results of the paper. 4. The topic considered is an important one. The authors take a technique that has been shown to work empirically and develop a sound theory around it. Weaknesses: 1. The current framework doesn't apply to algorithms that either i) depend on past iterates, such as gradient descent with momentum, or that ii) are not always contractive, such as accelerated gradient descent. It could be possible to easily extend the results to ii) by considering an enlarged space where x now contains the current iterate and past information (such as momentum), although I haven't checked whether it's possible to get back the bound on the Jacobian after this transformation. 2. One of the most interesting results IMO is the convergence of a bi-level scheme, but the analysis done in 3.3 seems more of an afterthought. To start, the statement is highly confusing: it's presented in terms of f, but the f here is not (if I understood correctly) the f of the inner optimization. Instead, it should be understood as any f that is L-Lipschitz.
In fact (and this is nowhere written), to obtain a bound on the suboptimality of the bilevel problem, we should take f = g. Even with this in mind, casting the result of Lemma 3 into a bound for the bilevel problem requires a number of substitutions. Why not state clearly the rate for the bilevel problem in terms of the quantities and constants that are defined for this problem? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: L20: I didn't understand the point of the phrase "it does not solely relies on the compositional rules of differential calculus" (and it has at least a grammatical error relies -> rely) Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: In general the paper is quite honest about its limitations. However, I suggest the authors expand a bit more on which class of algorithms verify and which don't Assumption 1 (see Weaknesses) Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for his feedback. We provide below detailed answers. 1. We agree with the referee that some algorithms may not appear directly in the form of a contraction or even not directly in the iterative form given in (1). A typical example is the heavy ball method for strongly convex problems: it should be considered in the phase space (because it is a second-order algorithm), and iterations are not contractive. However, the composition of a certain number of iterations is contractive in the phase space. We will add a remark and cite this example with a bibliographic reference to illustrate it. 2. We agree that Section 3.3 needs rewriting, and we will work on it in a revised version. More precisely, we will add a corollary for the bilevel problem by combining the results of Corollary 4 and Lemma 3. Furthermore, we will modify the notations of Lemma 3 as they are not harmonized with the rest of the text (x and f do not have the same meaning as above). We prefer to keep Lemma 3 as a separate result since it applies beyond bi-level problems. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers. I keep my score unchanged.
Rebuttal 1: Rebuttal: We thank the referees for the feedback on our work. We are glad that the referees found our paper *well-written* (WGTU, G3kk, USU8, qFET) with a *clear message* (qFET). Both the theoretical analysis and our experiments seem to have been appreciated by the reviewers. We propose to include remarks and discussions following their comments. We will also modify Section 3.3: use different notations for Lemma 3 and add a corollary dedicated to bilevel problems (following remarks of WGTU, G3kk and qFET). We will also include more details on the numerical experiments (implicit differentiation implementation details and random generation details, following the remarks of agCr and qFET). We provide below a more detailed response to each comment of the reviewers.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies the gradient of iterative algorithms, specifically examining one-step differentiation of these algorithms. This approach replaces the complex Jacobian inverse in implicit differentiation with a simple and fast identity approximation. The work refines the theoretical approximation analysis for one-step differentiation and characterizes its efficiency under fast convergent algorithms. Numerical examples are provided to illustrate the well-foundedness of the one-step estimator. Strengths: - Organization. The paper is well-structured overall. Each section has clear motivation, and there is good coherence in the transition of analysis. The presentation of the algorithms and analysis is easy to follow. - Detailed approximation analysis. The work includes the approximation analysis for different algorithms under the one-step setting. Weaknesses: - Missing references. The work [1] also discusses the theoretical aspects of the differentiation approximation of iterative processes and should be included. - Notations. The paper could benefit from more explicit notation to enhance readability. - Novelty. While the novelty of the algorithm proposal is not a major concern, it becomes a considerable issue regarding the analysis part. The concept of one-step differentiation has been proposed for a while. Although this paper does not claim novelty in the algorithm proposal and instead focuses on the theoretical aspects, the approximation analysis is not sufficiently novel from a theoretical standpoint. I would expect more insights related to optimization and generalization when applying one-step differentiation to learning problems. [1] Zhengyang Geng, Xin-Yu Zhang, Shaojie Bai, Yisen Wang, and Zhouchen Lin. On training implicit models. Advances in Neural Information Processing Systems, 2021 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: No limitations are included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly thank the referee for his constructive comments. Regarding the weaknesses pointed out by the reviewer, we propose the following modifications, which we hope will address the reviewer's concerns so that the reviewer can update his evaluation. - One-step differentiation indeed has connections with the phantom gradients proposed in [1]; we will definitely add and discuss the missing reference. - We will sharpen the notations throughout the text and propose a deep rewriting of Section 3.3 to avoid notational confusion. - Following the reviewer's comment, as well as the comments of other reviewers, we will rework Section 3.3 and add a corollary providing explicit optimization guarantees for the bilevel problem. This will combine Corollary 4 and Lemma 3. This constitutes a specific instantiation of our results for learning problems which are formulated as bilevel problems, with applications in hyper-parameter tuning or meta-learning. --- Rebuttal Comment 1.1: Title: Reply to Authors Comment: Thank you for your clarification! I'd keep positive toward acceptance.
Fast Scalable and Accurate Discovery of DAGs Using the Best Order Score Search and Grow Shrink Trees
Accept (poster)
Summary: This paper introduces the best order score search (BOSS) algorithm and grow-shrink trees (GSTs) for learning directed acyclic graphs (DAGs) in machine learning and causal discovery. BOSS achieves state-of-the-art performance in accuracy and execution time, making it a valuable tool for problems with highly connected variables. The paper also applies BOSS to resting-state fMRI data, demonstrating its practicality and effectiveness. The algorithm is designed to be a more efficient alternative to the existing Greedy Sparsest Permutation (GRaSP) algorithm. Strengths: - The paper introduces the concept of Grow-Shrink Trees (GSTs), which are tree data structures for caching the results of the grow and shrink subroutines of GS. This data structure is compatible with many permutation-based structure learning algorithms, including BOSS and GRaSP. The use of GSTs is a novel approach that efficiently stores information needed for running GS. - The paper is well-written and clear. It provides detailed explanations and examples, making it easy to understand the proposed algorithm and its components. The use of figures and tables also helps to illustrate the concepts and results. - The paper compares BOSS against other algorithms such as GRaSP, fGES, PC, and DAGMA on linear Gaussian data generated from Erdős-Rényi and scale-free networks. The results show that BOSS maintains a high level of accuracy while scaling much better than GRaSP, indicating that it could be a valuable tool in the field of structure learning. Weaknesses: - One potential weakness is the lack of discussion on the limitations of the BOSS algorithm and GSTs. While the paper acknowledges that there is room for additional improvements, it does not provide a detailed analysis of the limitations of the proposed approach. For example, the paper could discuss the assumptions made by BOSS, such as causal sufficiency, and how these assumptions may affect the accuracy and scalability of the algorithm. 
Additionally, the paper could explore the applicability of BOSS to other types of data beyond fMRI, such as financial data or electronic health records. Addressing these limitations would provide a more comprehensive understanding of the strengths and weaknesses of the proposed approach. - Validation on more Real-World Data: The validation of the algorithm on fMRI data is a good step, but more extensive validation on diverse real-world datasets could strengthen the robustness of the findings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the BOSS algorithm handle the problem of confounding variables in fMRI data, and how does it ensure that the recovered causal networks are not biased by these variables? - Would there be non-linear causal relationships in fMRI data, and how does it ensure that the recovered causal networks capture these relationships accurately? - How does the performance of the BOSS algorithm vary with the level of sparsity in the underlying causal graph in fMRI data? - Would the performance of the model vary with the level of noise in data? - Discussion on Limitations: Could you include a section discussing the limitations of your work and potential areas for future research? - Results Interpretation: Could you provide more insight into how you interpreted your results and why you drew the conclusions you did? This would help to understand the implications of your findings. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The authors have discussed the limitations of their work in the context of the performance of the BOSS algorithm. 
They have compared it with other algorithms like GRaSP, fGES, PC, and DAGMA on linear Gaussian data and have also tested it on non-Gaussian data. They have acknowledged that while BOSS performs well in terms of BIC score and running time, its recall may be low in certain scenarios. Question: You have mentioned that BOSS has a low recall in certain scenarios. Could you elaborate on these scenarios? How could this limitation be addressed in future work? - It would be beneficial to include a section in your paper discussing the ethical implications of your work. This could include potential misuse of your algorithm, biases in the data that could affect the algorithm's performance, and the implications of incorrect predictions by the algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Apologies for truncating a few of your comments, we ran out of characters for our rebuttal. **One potential weakness is...** Thanks for your comments and criticisms; we have found them helpful and constructive! See our responses to your points below. **Validation on more Real-World Data: The validation of the algorithm on fMRI data is a good step, but more extensive validation on diverse real-world datasets could strengthen the robustness of the findings.** We hope to apply this algorithm to many real-world datasets in the future; however, we lack the space to include multiple examples in the current paper. Regarding the choice of real-world application, fMRI was used due to the low recall of other methods in highly connected systems. The high recall of BOSS and GRaSP in these problems is not trivial, and GRaSP could not scale to the number of variables in most parcellations of fMRI data before this paper. We also think it is worth noting that the fMRI application included in the paper ran the BOSS algorithm on 171 datasets with 379 columns and 520 rows, so the included application was not a small undertaking. **How does the BOSS algorithm handle the problem of confounding variables in fMRI data, and how does it ensure that the recovered causal networks are not biased by these variables?** Thanks for the question; we refer you to our responses to the other reviewers on this point. In summary, we expect to find extra edges, but they can be removed using post-processing techniques which we leave for future work. **Would there be non-linear causal relationships in fMRI data, and how does it ensure that the recovered causal networks capture these relationships accurately?** In general, fMRI data look fairly linear; however, it should be noted that the linearity assumption is made for the Gaussian BIC score and not for the BOSS algorithm itself. 
The Gaussian BIC score can be swapped out for another consistent score that models non-linearities in the data, and correctness will hold. However, it should be noted that such scores are often much slower, so the algorithm will not scale as well in this case. Lastly, we note that the dominant technique used to analyze functional connectivity in fMRI data is to threshold the sample correlation matrix, which also does not handle non-linearity. **How does the performance of the BOSS algorithm vary with the level of sparsity in the underlying causal graph in fMRI data?** It is hard to analyze how BOSS deals with varying levels of sparsity in fMRI data since the true underlying structure is unknown. That being said, we believe the results that look at varying levels of sparsity for the scale-free linear cases should give an idea. Indeed, the method by which we simulated the scale-free aspect of the graph was inspired by the fMRI data. Figures 3 and 4 show the performance of BOSS (and other algorithms) as a function of the average degree of the underlying model. For the range of parameters we considered, the performance of BOSS (precision and recall for adjacencies and orientations) was essentially unaffected by variation in sparsity. **Would the model's performance vary with the noise level in the data?** If the noise is increased in the exogenous noise terms, it does not appear to have a substantial effect; we’ve tested this for a wide range of these parameters (though there wasn’t room to include a formal result for this in the paper). If the noise is a large additional measurement noise, we have not tested this and plan to in future work. We have added this comment to the paper. Small added Gaussian measurement noise has been unproblematic in our testing. 
**Discussion on Limitations: Could you include a section discussing the limitations of your work and potential areas for future research?** We have collected the various comments regarding the limitations of the work and potential areas of future research and will include them in a new section. **Results Interpretation: Could you provide more insight into how you interpreted your results and why you drew the conclusions you did? This would help to understand the implications of your findings.** The results on the fMRI data are mainly included as a proof of concept. We hoped they would demonstrate that this algorithm can be applied to such data, and the comments regarding the scale-free nature of the learned output are mainly to note that we are learning something that is consistent with the accepted nature of brain connectivity. We are currently working on several applied projects using this algorithm, where we plan to dive deeply into what we have discovered using this algorithm. This paper is intended to introduce the algorithm but not dive deeply into any data analysis; we leave that to future work. **The authors have discussed the limitations of their work in the context of...** The performance of BOSS drops off in Table 2 (Figure 4b) when the penalty discount (BIC lambda) is increased. This was done for the sake of comparison to the GRaSP algorithm, which required the lower penalty discount in order to return in a reasonable amount of time (under an hour). However, there is no reason not to also report the results of the BOSS algorithm run on these data with a lower penalty discount. We expect the recall of BOSS to improve by doing so and plan to add these results to Table 2 (Figure 4b). The performance of BOSS could also degrade when its assumptions are violated, and we will include a discussion on this in the added section on limitations. 
**It would be beneficial...** Thanks for the suggestion, we will comment on the ethical implications of misusing / misinterpreting the results of our algorithm as you suggest with / after the added discussion of limitations. --- Rebuttal 2: Title: Response to Rebuttal Comment: Thanks to the authors for the additional explanation. I believe adding those details can further help readers better understand the work. I would like to retain my original ratings.
Summary: It's beyond my knowledge so please disregard my scores and reviews. Strengths: It's beyond my knowledge so please disregard my scores and reviews. Weaknesses: It's beyond my knowledge so please disregard my scores and reviews. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: It's beyond my knowledge so please disregard my scores and reviews. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: It's beyond my knowledge so please disregard my scores and reviews. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your honesty!
Summary: The authors introduce two novel methods for learning Directed Acyclic Graphs (DAGs): best order score search or BOSS and grow-shrink trees or GSTs. These methods have a similar performance to state-of-the-art approaches (namely GRaSP and also others: fGES, DAGMA, LiNGAM, etc), but the authors show that they are less computationally intensive and therefore more scalable. The authors then validate their methods by learning brain networks from synthetic and real brain recordings (fMRI). Strengths: - originality: the paper introduces two novel approaches for learning DAGs. Even though this work closely follows previous studies on permutation-based structure learning, it also presents significant new departures from the previous algorithms. - quality: the theoretical proofs seem solid and the methods are backed with a reasonable amount of numerical and experimental evidence. - clarity: the paper is very concisely and clearly written, although a lot of the information is in the supplements. - significance: learning networks from brain recordings is a very useful tool for understanding the brain and its pathologies. A method that significantly increases scalability with the same accuracy as the state-of-the-art is therefore noteworthy. Weaknesses: - Validation on real fMRI is (somewhat understandably) weak because the scale free connectivity of the brain remains a theory rather than a proven fact. - Many details of the work are left out from the main paper and included in the supplemental material. It is almost hard to understand the main paper without reading the supplements. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No questions. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Validation on real fMRI is (somewhat understandably) weak because the scale-free connectivity of the brain remains a theory rather than a proven fact.** Thank you for your comment, we will add references (a few relevant references are listed below) to bolster this particular point. These references give more recent, strong positive support for this position. Lynn, C. W., & Bassett, D. S. (2019). The physics of brain network structure, function and control. Nature Reviews Physics, 1(5), 318-332. Galinsky, V. L., & Frank, L. R. (2017). A unified theory of neuro-MRI data shows scale-free nature of connectivity modes. Neural computation, 29(6), 1441-1467. Mansour L, S., Di Biase, M. A., Smith, R. E., Zalesky, A., & Seguin, C. (2023). Connectomes for 40,000 UK Biobank participants: A multi-modal, multi-scale brain network resource. bioRxiv, 2023-03. Hanson, S. J., Mastrovito, D., Hanson, C., Ramsey, J., & Glymour, C. (2016). Scale-free exponents of resting state provide a biomarker for typical and atypical brain activity. arXiv preprint arXiv:1605.09282. Zhang, A., Fang, J., Liang, F., Calhoun, V. D., & Wang, Y. P. (2018). Aberrant brain connectivity in schizophrenia detected via a fast gaussian graphical model. IEEE journal of biomedical and health informatics, 23(4), 1479-1489. Grosu, G. F., Hopp, A. V., Moca, V. V., Bârzan, H., Ciuparu, A., Ercsey-Ravasz, M., ... & Mureșan, R. C. (2023). The fractal brain: scale-invariance in structure and dynamics. Cerebral Cortex, 33(8), 4574-4605. **Many details of the work are left out of the main paper and included in the supplemental material. It is almost hard to understand the main paper without reading the supplements.** Thank you for this comment, the supplement contains details about: (1) how the data were simulated, (2) the parameters chosen for the various algorithms, and (3) tables giving the exact values of the results that are depicted as plots in the main paper. 
We included these details in the supplement for readers who wish to delve more into the details of our experiments, and we do not feel that the paper is incomplete without them. Could you be more precise about which aspect of the supplement you think should be included in the main paper?
Summary: The paper proposes a computationally efficient algorithm using the grow-shrink trees to iterate through some of the combinatoric number of directed acyclic graphs in a graphical model. The approach builds on early work but is faster. Strengths: The paper is straightforward, easy to read (examples are given), and focused in its presentation. The results show that it is faster than the baseline GRaSP. Weaknesses: The paper's focus on the approach also means there is not much perspective given until the final discussion. The paper's claim of scaling to thousands is not thoroughly tested (results for 1000 appear to take 500 s). It is not clear what is necessary to apply the approach to higher-resolution parcellations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the computational complexity of the algorithm? 2. Proposition 4 states that the initial permutation doesn't matter if some conditions are met. In practice, if these conditions are unmet or unknown, does the initial permutation matter? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The paper's focus on the approach also means little perspective is given until the final discussion.** We will add more details to / rework the introduction to help readers understand our perspective. In order to help us do this, could you describe what perspective you gained in the final discussion that was missing in the introduction? **The paper's claim of scaling to thousands is not thoroughly tested (results for 1000 appear to take 500 s). It is unclear what is necessary to apply the approach to higher-resolution parcellations.** Our intention was not to claim that our algorithm can scale to thousands of variables but rather to demonstrate that we can scale to at least 1000 variables on a laptop in under 10 minutes — this was done on a single processor and could be further sped up by using multiple processors in parallel. The paper presents an algorithm that can scale to densely connected problems an order of magnitude larger than what was previously possible in terms of variables (GRaSP without the use of GSTs does not scale much past 100 variables). As far as we know, fMRI parcellations in the literature rarely have more than 1000 variables. **What is the computational complexity of the algorithm?** We will add a discussion about the algorithm’s complexity and running time. In the worst case, the main loop will be performed no more than n times, where n is the number of variables. In our experiments, though, BOSS usually repeated the main loop only two or three times. Each iteration of the main loop requires 3n^2 runs of the grow-shrink subroutine, so BOSS will make O(n^3) calls to the GSTs. While profiling our algorithm, we found that it spends nearly all of its execution time performing such GST calls. These calls are cached so redundant regressions that need to be performed while running similar grow-shrink subroutines can simply be looked up rather than recalculated. 
Indeed, one can show that once a regression is performed, it will never need to be run again. **Proposition 4 states that the initial permutation doesn't matter if some conditions are met. In practice, does the initial permutation matter if these conditions are unmet or unknown?** In general, it is possible to run the entire algorithm multiple times from different initial permutations and take the result that yields the best BIC score; our software has a parameter “number of runs” that allows the user to do this. In our simulations, when the ground truth is a DAG, there is little to no substantial advantage to considering multiple starting points. However, assuming that the ground truth is a DAG is admittedly a bad assumption for most real-world datasets. In the causally insufficient case, regardless of the initial permutation, we can expect to learn a few extra edges. This can be corrected if the user post-processes the learned graph with the FCI algorithm, as is done in the GFCI algorithm; we leave this investigation to future work. Also, regarding confounding and fMRI: in addition to the whole-brain coverage, we would add a note on the preprocessing applied to this real fMRI data, in particular what is commonly referred to as nuisance regression, i.e., regressing movement, trends, white-matter, and CSF artifacts out of the BOLD signal. Both strategies strongly mitigate the effect of possible confounders. --- Rebuttal Comment 1.1: Title: I've read the rebuttal. Comment: **The paper's focus on the approach also means little perspective is given until the final discussion.** In the discussion the assumptions and limitations are stated. In the discussion the family of graphs (ER and scale-free) underlying the DAGs for the synthetic data are mentioned. 
**The paper's claim of scaling** Although parcellations with more parcels aren't common, couldn't arbitrarily small parcels be created, down to individual voxels, through various techniques? My point is that it should be clarified how scalable the proposed techniques are, even if real data is not readily available. **What is the computational complexity** Thanks. **Proposition 4** The option to run it multiple times and the motivation (not assuming a DAG) should be clearly stated in the method section, not just the software, as should be the inclusion of fast causal inference as post-processing. --- Reply to Comment 1.1.1: Title: Thanks for the comments! Let me elaborate a bit... Comment: Nice--I'm one of the other authors; the main author gave me permission to respond, haha, he will regret it... :-) Mentioning scale-freeness in the intro is a great idea. There is reasonable consensus among many now that fMRI connectivity is scale-free. GRaSP and BOSS deal with this sort of data very nicely, far better than the usual causal search algorithms, even out to an average degree of 20, without any compromise in quality. Also, we plan to rework the introduction in response to various comments. We're thinking maybe the "wow" factor of doing graphs with an average degree of 20 perhaps didn't come through well. Nearly all papers we've seen in NeurIPS and elsewhere deal only with very sparse graphs, average degree 2, 4, or 6, which for a large number of variables is sparse in the extreme. We wanted to emphasize that our method could handle a much higher average degree without sacrificing accuracy (as almost all algorithms do, to the best of our knowledge) and responds well to many people's worries about doing causal searches accurately with dense graphs. 
DirectLiNGAM does reasonably well for dense graphs, but only under the stronger assumption of linear (strong) non-Gaussianity, and the empirical fMRI data we use are quite Gaussian, so it is not surprising (see Table 2 in our paper) that DirectLiNGAM performs at chance levels for this sort of data. (Compare this to DAGMA in our charts, for instance, another Gaussian method published recently in NeurIPS and the best-performing continuous optimization algorithm we could find, so this is a comparison to that whole approach, so far as we currently know.) I was actually very excited to see the comment about scalability. For this paper, the state of the art for Gaussian searches for dense models is currently GRaSP, though the published GRaSP scales only to about 100 variables, and in that paper, the maximum average degree tested was 10. In this paper, using a different permutation algorithm, we've increased the number of variables we can comfortably analyze 10-fold and the average degree we can analyze 2-fold without sacrificing accuracy. This sets a new state of the art for this problem, so far as we know. Notice also that even DirectLiNGAM, under a stronger assumption, takes much longer to run on this problem without appreciably better results; BOSS is able to match and even sometimes improve on the performance of DirectLiNGAM even for the linear, non-Gaussian case, which is helpful since even if you know that some variables are non-Gaussian you may not know that all variables are non-Gaussian. What excited me was that you wanted to extend the analysis to voxel-level analysis, which would scale to at least 40,000 variables. Genetic data also scales into that general dimension if one includes protein data in the analysis. We are keen to scale methods to that dimension for dense, scale-free data and are currently working through the various issues with parallelization, etc. However, there is no room in the current paper to address those optimizations; we are at the page limit. 
We plan to write another paper that does. It goes without saying that with good parallelization (not easy, key steps not embarrassingly parallel), we could jump from doing these analyses on laptops in a single thread to many-core machines. Also, we will definitely include the multiple-runs option in the methods section, thanks, and mention that for small numbers of variables (not so much for massive models), it can help. The idea of post-processing the BOSS output with a GFCI wrapper was proposed in the GRaSP paper but not explored, and we are hesitant to include any specific proposals on that before we've worked through the thorny issues of scaling a latent variable search up to the size of problems that we are interested in. No one can do that now, even remotely, but we will try. We do have implementations of the procedure that people can try if they wish to see how well it does before such optimization. Using BOSS as an initial step certainly improves accuracy for these PAG methods. Regarding not being abreast of the permutation search literature, for one thing, GSTs are completely novel as of this paper, as is the BOSS algorithm, so we weren't expecting anyone to be familiar with those, no worries. The permutation methods, though, are very promising, as they currently outperform DAG (CPDAG) search over all other methods we know of for the linear Gaussian case by a long shot and match the best-performing algorithms for the linear non-Gaussian case. We're hoping to raise awareness of this methodology--i.e., hoping people will go back and read the Raskutti and Uhler paper, the Solus et al. paper, and the Lam et al. GRaSP paper and various other papers that have been published using permutations of variable orders as a key step.
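As background for the caching discussion in this thread, the grow-shrink pattern with score memoization can be sketched generically. The following is a toy illustration only: `score` is a made-up set function standing in for a real BIC evaluation, and a flat dictionary cache stands in for the paper's GST data structure; the point is just that each distinct scoring computation runs at most once, so repeated grow-shrink passes become cheap lookups:

```python
score_calls = {"n": 0}   # counts actual (non-cached) score computations
cache = {}

TRUE_PARENTS = {1, 2}    # hypothetical ground truth for the toy score below

def score(parents):
    """Toy score: reward true parents, penalize set size (not a real BIC)."""
    key = frozenset(parents)
    if key not in cache:              # memoization: each scoring computation
        score_calls["n"] += 1         # is performed at most once
        cache[key] = len(key & TRUE_PARENTS) - 0.5 * len(key)
    return cache[key]

def grow_shrink(candidates):
    parents = set()
    # Grow: greedily add the candidate that most improves the score.
    improved = True
    while improved:
        improved = False
        best, best_s = None, score(parents)
        for v in candidates - parents:
            s = score(parents | {v})
            if s > best_s:
                best, best_s = v, s
        if best is not None:
            parents.add(best)
            improved = True
    # Shrink: remove any variable whose removal improves the score.
    improved = True
    while improved:
        improved = False
        for v in list(parents):
            if score(parents - {v}) > score(parents):
                parents.remove(v)
                improved = True
    return parents

result = grow_shrink({0, 1, 2, 3})
first_pass_calls = score_calls["n"]
grow_shrink({0, 1, 2, 3})            # second run is answered entirely from cache
assert result == {1, 2}
assert score_calls["n"] == first_pass_calls
```

In a permutation search like BOSS, similar grow-shrink calls recur across many permutations, which is why caching (and, in the paper, the tree-structured GSTs rather than this flat dictionary) dominates the practical running time.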
Rebuttal 1: Rebuttal: We thank all reviewers for their comments! We noticed that none of the reviewers were especially confident in their reviews (none had confidence greater than 2); perhaps our responses can help in this regard. The reviewers did not have many comments regarding the technicalities of the algorithm or data structure presented in the paper. There were, however, some areas where the presentation can be improved, and it’s clear from the comments that some context is needed to tell the reader what the contributions of the paper are to the practical issue of analyzing large-scale data accurately from a causal point of view. We will address comments that ask for clarification or more information and add text in the paper to address these comments. In general, we plan to: - add a discussion of future work, limitations, and ethical considerations - add a discussion about the complexity of the algorithm - add more details to the figure that demonstrates BOSS's scalability - clarify that GRaSP is using GSTs — without GSTs, GRaSP does not scale much past 100 variables - add comments about DirectLiNGAM in the Gaussian case - adjust language for clarity where necessary and correct any typos
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
A Simple Solution for Offline Imitation from Observations and Examples with Possibly Incomplete Trajectories
Accept (poster)
Summary: This paper introduces Trajectory-Aware Imitation Learning from Observations (TAILO) for offline imitation learning. TAILO tackles the problem of learning from incomplete trajectories, where other state-of-the-art (SOTA) methods fail. Specifically, TAILO proposes a simple yet effective solution. It first learns to recover the task reward function by distinguishing the expert demonstrations from the sub-optimal data. In addition, TAILO treats the non-expert data as an unlabeled dataset, and relabels the reward of sub-optimal transitions to provide more training data for the policy. Next, TAILO performs policy learning by weighted behavior cloning with accumulative return along a successful trajectory segment. In the experiments, TAILO successfully outperforms a series of baselines on offline imitation learning with incomplete trajectories. Strengths: - The paper studies an interesting problem with potential real-world applications. - The empirical performance looks promising. Weaknesses: There are several main weaknesses of the paper. Firstly, the presentation of the paper still has room for improvement. Specifically, I would suggest the authors double check the literature and be careful when making statements. There are several important but unfortunately erroneous statements in the paper. For example, the very first sentence in the abstract: > Offline imitation from observations aims to solve MDPs where only task-specific expert states and task-agnostic non-expert state-action pairs are available. This statement is inaccurate as by its definition, offline imitation learning discusses the case where a policy is learned with only off-policy data, excluding the on-policy environment interactions. Consider the standard behavior cloning, for example. The use of non-expert transitions is just one of the solutions to assist policy learning, as in ValueDICE [1], DemoDICE [2], etc. 
Also, in line 113, the authors state that there are three main steps for DICE methods: > DICE methods consist of three parts: reward generation, optimization of the value function V(s), and weighted behavior cloning. This is inaccurate as well. For example, ValueDICE [1] directly learns the value function, rather than the rewards. Besides the presentation issues, the contribution of the paper is relatively minor. The use of cumulative return / advantage function is standard in reinforcement learning literature. Mathematically, only maximizing the cumulative return gives a correct policy improvement step. If we consider the specific form of weighted behavior cloning, or more precisely, advantage weighted regression, we can still find many existing works with a similar structure [3, 4, 5]. There are some potential issues with the mathematical correctness and experiment details, which I will reserve for the questions. References: [1] Kostrikov, I., Nachum, O., & Tompson, J. (2020). Imitation learning via off-policy distribution matching. In International Conference on Learning Representations. [2] Kim, G. H., Seo, S., Lee, J., Jeon, W., Hwang, H., Yang, H., & Kim, K. E. (2022, October). Demodice: Offline imitation learning with supplementary imperfect demonstrations. In International Conference on Learning Representations. [3] Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., & Riedmiller, M. (2018). Maximum a posteriori policy optimisation. arXiv preprint arXiv:1806.06920. [4] Peng, X. B., Kumar, A., Zhang, G., & Levine, S. (2019). Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177. [5] Wang, Z., Novikov, A., Zolna, K., Merel, J. S., Springenberg, J. T., Reed, S. E., ... & de Freitas, N. (2020). Critic regularized regression. Advances in Neural Information Processing Systems, 33, 7768-7778. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - In Sect. 
3.2, the authors propose to learn two discriminators to label the non-expert trajectories and learn the reward function in an inverse reinforcement learning manner. However, it is common that solving a min-max game is naturally unstable and hard to optimize. Coupling two min-max objectives would even make it worse in my opinion. Are there any tricks to improve the stability? What are the failure cases of such an optimization scheme? - In Eqn. 6, the authors propose to weight the log-likelihood by $\sum\limits_{j=0} \gamma^j \exp(\alpha R(s_{i+j}))$. This seems a bit odd to me. If we treat $R(s)$ as the exact reward function and solve the policy improvement by RL as inference, we will end up with exactly the AWR [1] objective, which is $\exp(Q(s, a) - V(s)) = \exp(\sum\limits_{j=0}\gamma^j R(s_{i+j}) - V(s_i))$ for the weight. Thus, I’m wondering if there is a detailed derivation of this objective in Eqn. 6? - In Fig. 2, the results look quite interesting and promising. One thing I’m curious about is that SMODICE actually performs worse in some cases with less data removed. For example, SMODICE completely fails on Walker2d_1/20, 1/10, 1/5, but achieves a fairly strong performance on Walker2d_1/3. Could you provide some additional explanations on this? - The experiments mainly focus on continuous control tasks, where the agent has repeating cyclic patterns for its actions. With this in mind, it is a bit unclear to me why incomplete trajectories would be an issue, if the dropped trajectory segments are repeating the remaining ones. In this case, does the performance gain come from the use of unlabeled data or the better Q-value approximation? References: [1] Peng, X. B., Kumar, A., Zhang, G., & Levine, S. (2019). Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: As discussed above, this paper has certain limitations regarding its presentation, novelty, and mathematical correctness. I do appreciate the authors’ efforts on the extensive experiments and detailed appendix. However, I think the paper is not ready for publication yet and major revision is needed for it to be published in another venue. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback. ### Q1. Inaccurate statements. - Offline imitation from observations aims to solve MDPs where only task-specific expert states and task-agnostic non-expert state-action pairs are available. This is correct. Note that we are discussing offline **imitation (Learning) from Observations (LfO)**, not **Imitation Learning (IL)**, i.e., there are no expert actions. ValueDICE [20], DemoDICE [21] and Behavior Cloning (BC) all require expert actions, which are missing in LfO. With only expert states and no interactions, the **Task-Agnostic (TA)** dataset is the only source for MDP dynamics. - (As mentioned in Sec. 2,) DICE methods consist of three parts: reward generation, optimization of the value function V(s), and weighted behavior cloning. **1. This sentence needs to be put in the paper’s context.** As we write right before, “As mentioned in Sec. 2”, the “DICE methods” here refers to those discussed in Sec. 2. We only mention SMODICE [1] and LobsDICE [2] in Sec. 2, and they both satisfy the statement. **2. ValueDICE requires expert actions; it is not designed for the LfO setting discussed in the paper.** One can adapt ValueDICE to the LfO setting as shown by OPOLO [22]. But such a baseline is improved upon by OPOLO, which is in turn outperformed by our baseline LobsDICE. 3. To prevent confusion, we will change the sentence to “the DICE methods for offline LfO discussed above”. However, we think the message within the context has no factual error. ### Q2. Limited contribution beyond Advantage Weighted Regression (AWR) style [23] methods. **1. Our method is not a variant of AWR.** Our goal is to learn more from expert trajectories in TA datasets. We adopt the reward from SMODICE / LobsDICE and improve it with Positive-Unlabeled (PU) learning. We adopt the pessimism principle in offline RL by giving small weights to non-expert data, and use the exp function for thresholding. We sum over future returns, as the policy does not depend on the past in an MDP. 
The discount factor balances expert experience propagation (line 56-58) and avoidance of non-expert segments in the middle. AWR is totally different: it iteratively maximizes the (approximated) advantage from the policy of the last iteration under a divergence constraint. **2. The AWR methods listed in the review learn a value function, which we try to avoid.** We verify empirically that the value function is hard to obtain in our settings, e.g. missing steps in TA datasets. Also, **consistent with prior work [8], MARWIL [7] (single-iteration AWR) struggles in our settings.** See pdf in the global response for reward curves. 3. **We provide theoretical (Sec. C) and empirical evidence on the downsides of SOTA methods, SMODICE and LobsDICE,** which is another novelty. ### Q3. Stability issue of discriminator learning with coupled min-max objectives. Tricks? **1. The discriminator training is not a min-max game.** Only one moving part (c(s) or c’(s)) exists in Eq. 4 and 5 respectively. The max operator inside the min objective is a clipping technique for debiasing (see Sec. A for details). 2. A Lipschitz regularizer is used for better stability (see Sec. F.4.7). 3. Optimization fails with extreme hyperparameters or without the Lipschitz regularizer (see Sec. F.4.2, F.4.3 and F.4.7). ### Q4. Missing derivations for Eq. 6. As stated in Q2, our method is not a variant of AWR. Thus, Eq. 6 is immediately intuitive: the weight for BC is determined via a discounted sum of future returns; the exp function serves as a thresholding method upweighting expert data, as opposed to AWR’s closed-form solution. ### Q5. Abnormal SMODICE-KL behavior in ablation. **1. Collapses of SMODICE-KL come from value function divergence.** As stated in Sec. C.2.1 in the appendix, SMODICE-KL diverges with missing data in the task-agnostic trajectories. When this happens (e.g., halfcheetah_1/5), SMODICE-KL collapses; otherwise, it performs well (see pdf in the global response for visualization). **2. 
The smoothing effect of the Neural Network (NN) mitigates the divergence, but the strength of the effect varies.** As the data distribution differs, the smoothing effect varies across datasets. Thus, the time it takes for divergence differs. **3. With more frequent and uniform updates, the abnormality caused by NN smoothing disappears.** As shown in Fig. 19, with a larger batch size, the updates on each data point are more frequent and uniform. Thus, the trend that increased noise leads to worse performance on SMODICE-KL (thin blue line) is obvious. ### Q6. Is the performance gain coming from the use of unlabeled data or better Q-estimation, as the environment has cyclic patterns? **1. Our method performs well in non-cyclic environments.** Kitchen (Sec. 4.3, 4.4), antmaze (Sec. 4.3-4.5) and pointmaze (Sec. 4.4) are non-cyclic environments; our method works well on all of them. **2. Cyclic patterns are more beneficial to baselines.** With cyclic patterns, the states are closer to each other, and the NN smooths out the diverging terms in SMODICE / LobsDICE more easily. In non-cyclic environments, our method gets a better R(s) as the expert trajectories lead in a different direction from non-expert ones in the state space, but the divergence terms in SMODICE / LobsDICE remain. 3. We are unsure about what the reviewer refers to with “the use of unlabeled data”. **a) If it is the TA dataset, then it is always crucial.** As mentioned in Q1 point 1, the TA dataset is the only source for MDP dynamics in offline LfO. **b) If it is PU learning that leverages the unlabeled essence of the TA dataset, then there is indeed a performance gain. In Sec. F.6**, our PU learning is much better than ORIL’s (and than no PU; see pdf in the global response) on environments like halfcheetah_mismatch. **4. Better Q-estimation is important. Sec. F.6** shows that ORIL with our reward still performs badly. In fact, our motivation is to avoid learning it and to choose a non-parametric but robust approach. 
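To make the weight in Eq. 6 concrete: the discounted, exp-thresholded suffix sum $\sum_{j\geq 0} \gamma^j \exp(\alpha R(s_{i+j}))$ quoted in the review can be computed non-parametrically in one backward pass over a trajectory. A minimal sketch (the `gamma` and `alpha` defaults are placeholders for illustration, not the paper's hyperparameters):

```python
import numpy as np

def tailo_bc_weights(rewards, gamma=0.98, alpha=1.0):
    """Non-parametric BC weights: w_i = sum_{j>=0} gamma^j * exp(alpha * R(s_{i+j})).

    Computed right to left, so each state's weight reuses the discounted
    suffix sum of its successor; no value function is ever learned.
    """
    exp_r = np.exp(alpha * np.asarray(rewards, dtype=float))  # exp acts as a soft threshold on R(s)
    weights = np.empty_like(exp_r)
    running = 0.0
    for i in range(len(exp_r) - 1, -1, -1):  # discounted suffix sum
        running = exp_r[i] + gamma * running
        weights[i] = running
    return weights
```

High rewards anywhere later in a trajectory segment propagate backward through the suffix sum, which matches the stated intent of upweighting states that lead into expert-like segments without fitting a parametric value function.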
### References See global response. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed responses by the authors. They have addressed lots of my concerns. However, I disagree with the explanation about the connection to AWR. In fact, AWR is derived from Reward-Weighted Regression (RWR) [1], which uses exactly the same form as the proposed method. Moreover, the underlying derivation is shared. I still feel that from the theoretical side, this work is relatively weak and needs improvement. I will improve my rating to 4. References: [1] Peters, J., & Schaal, S. (2007, June). Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the 24th international conference on Machine learning (pp. 745-750). --- Reply to Comment 1.1.1: Title: Further Response to the Reviewer Comment: Thanks a lot for appreciating our response and for your valuable advice! We are glad that our response has addressed most concerns. Below are further clarifications for the remaining points: 1. Our contribution is *in the context of offline Learning from Observations (LfO).* We analyze the shortcomings of prior SOTA, and present a new, effective, yet arguably simple method. We thank the reviewer for connecting the objective of our method and RWR. However, this similarity does not hinder the novelty of our method. Note that **RWR did not invent the objective either**. The objective dates back to at least the *related payoff procedure* ([1], Sec. 11.4; [2]) which introduces an expectation-maximization (EM) procedure for RL. Since then, many works including RWR [3-6] have been built on this basis, each with their own contribution. Therefore, we think that **the novelty does not depend on using such an objective, but on how we adapt the objective to our problem setting**. 2. 
Based on 1, despite the resemblance of the policy objectives, the most important difference between our method and RWR is that **our method has no iterative E-step for the critic and no M-step for the actor, and cannot be derived by following RWR, or more generally EM for RL**, as EM requires sampling state-action pairs from the policy in the last iteration, which isn’t straightforward in an **offline** setting. There are two workarounds: 1) importance sampling, and 2) naively using only one iteration. The former is known to be non-robust [7] and we show in our global response that the latter (MARWIL) is ineffective. Meanwhile, non-iterative policy learning with return estimated from task-agnostic rollouts in general has been proven successful by recent RL via supervised learning works [8, 9]. 3. Even within the RWR line of work, our method still has a unique contribution for our problem of interest. RWR itself is quite different from our work: a) RWR only considers immediate rewards rather than episodic returns ([10], Sec. 7). We showed the ineffectiveness in our global response pdf ($\gamma=0$). b) RWR has an adaptive reward scaling term $u_\tau(r)$ due to the EM framework. We don’t have this term. This term is not guaranteed to preserve an optimal policy and thus is not necessarily an advantage [10]. Some follow-up RWR works are closer to ours. To our best knowledge, the most similar one is PoWER [5], the only work that 1) considers episodic return, 2) has no adaptive reward scaling term, and 3) uses a non-parametric approximation for the advantage or Q-value (see our previous reply for the importance of 3)). But still, three key differences remain: a) Most importantly, in our scenario, the reward label is missing. Our work improves the reward used in SMODICE and LobsDICE with Positive-Unlabeled (PU) learning. 
In contrast, PoWER (**and all RWR works**) assume the reward labels are available; b) PoWER uses a bilinearly parameterized policy, while we advocate for a more flexible MLP policy; c) PoWER works on a finite-horizon MDP, while we use a discount factor for future returns. Consider the kitchen environment in our paper with subtasks A, B, C and D. Assume we have a task-agnostic trajectory finishing A, B, C in order, while our sequence of interest is A, D, C. The discount factor prevents our method from blindly following the experience for B after A, as it introduces a degree of myopia. Those differences are crucial for solving our task of interest, i.e., offline LfO. To sum up, we think our proposed and arguably simple method has merits that are valuable and are of interest to the LfO community. We genuinely appreciate your advice to strengthen our work; we will add this discussion to the revised version. Thanks a lot for your time and efforts. Please feel free to reach out with any additional comments that you may have. References: [1] G. E. Hinton. Connectionist Learning Procedures. 1989. [2] P. Dayan and G. E. Hinton. Using Expectation-Maximization for Reinforcement Learning. In Neural Computation, 1997. [3] J. Peters and S. Schaal. Reinforcement learning by reward-weighted regression for operational space control. In ICML, 2007. [4] J. Peters et al. Relative Entropy Policy Search. In AAAI, 2010. [5] J. Kober and J. Peters. Policy Search for Motor Primitives in Robotics. In NeurIPS, 2008. [6] T. Osa and M. Sugiyama. Hierarchical Policy Search via Return-Weighted Density Estimation. In AAAI, 2018. [7] O. Nachum et al. DualDICE: Behavior-Agnostic Estimation of Discounted Stationary Distribution Corrections. In NeurIPS, 2019. [8] L. Chen et al. Decision Transformer: Reinforcement Learning via Sequence Modeling. In NeurIPS, 2021. [9] D. Brandfonbrener et al. When does return-conditioned supervised learning work for offline reinforcement learning? In NeurIPS, 2022. 
[10] M. Strupl et al. Reward-Weighted Regression Converges to a Global Optimum. In AAAI, 2022.
Summary: This paper provides an offline imitation learning method for a specific problem setting where task-specific expert states (observations) are restrictively available and task-agnostic non-expert state-actions are supplementarily available. In this problem setting, the authors follow the line of work on DICE-based imitation learning methods, and present TAILO, Trajectory-Aware Imitation Learning from Observations. Specifically, TAILO employs a two-step positive-unlabeled (PU) learning scheme to obtain a state discriminator, and then uses rewards computed from the discriminator as weights for weighted behavior cloning. This procedure in TAILO hinges on the assumption that in the pool of task-agnostic data, there exist trajectories or long segments that are closely aligned with the optimal expert trajectories for the specific target task. The authors focus on different use cases of offline imitation learning on incomplete trajectories and show the benefits of TAILO in those cases through experiments. Strengths: This paper contributes to the area of offline imitation learning, particularly in scenarios where offline data conditions exhibit variability due to incomplete trajectories. The robustness of the proposed method, Trajectory-Aware Imitation Learning from Observations (TAILO), is thoroughly demonstrated in different data conditions through the experiments detailed in Section 4. These experimental conditions encompass scenarios with incomplete expert trajectories, incomplete task-agnostic trajectories, observations, and examples. The tests conducted appear to be comprehensive and the obtained results align well with the authors' focus, thereby solidifying the consistency and potential effectiveness of TAILO. Weaknesses: TAILO, as outlined in the paper, comprises two primary techniques, described in Sections 3.2 and 3.3. 
One shortcoming of this work, however, is the absence of ablation studies that evaluate these proposed techniques separately. To clarify the effectiveness of the Positive-Unlabeled (PU) learning method, alternative PU algorithms could be implemented and compared with the proposed technique. It would also be instructive to remove some losses as detailed in Equations (4) and (5) and assess the performance impact. In a similar vein, various weighting strategies could be trialed in the context of behavior cloning, including cases where no weight is applied at all. Carrying out these ablation studies could provide a clearer understanding of the individual effectiveness and contributions of each method within the TAILO framework. Minor errors: - The legend of Figure 5 could be relocated to avoid overlapping the plot. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors compare with the recent work [1] in offline imitation learning? [1] Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations (ICML 2022) Could the authors introduce some specific real-world application scenarios for the problem settings of incomplete trajectories? Is TAILO the first that applies PU learning in the context of offline imitation learning? Could the authors further clarify the novelty of their work? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation is described in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback. ### Q1. The absence of ablation studies in Sec. 3.2 and 3.3. Part of the requested ablation studies is in **Sec. F.6.** We test the remaining ablations and summarize here. **1. The ablation of Sec. 3.2.** In Sec. F.6, we compare ours and ours-V2, where we study the effect of using the ORIL-style Positive-Unlabeled (PU) learning technique instead of ours. As Fig. 23 and 24 suggest, there are significant differences in the Antmaze and halfcheetah_mismatch environments and marginal differences in the kitchen environment. We additionally test using no PU learning at all, which fails on the halfcheetah_mismatch and hopper_head50_tail50 environments (see pdf in the global response for reward curves). **2. The ablation of Sec. 3.3.** In Sec. F.6, we compare ours and ORIL-logR-V2, and ours and ours-V1. We find that the non-parametric retrieval of weights for Behavior Cloning (BC) is much better than the RL process introduced by ORIL, while ORIL is non-robust and often diverges. We also find that the log design of $R(s)=\log\frac{c(s)}{1-c(s)}$ is much better than the linear design $10c(s)$, as the former further emphasizes the reward given to states that are clearly classified as expert states. ### Q2. Alternative Positive-Unlabeled (PU) learning algorithms and removal of some losses in Eq. 4 and 5. **The requested ablation studies can be found in point 1 of Q1.** As stated in Q1, the result shows that our design of PU learning exhibits a significant improvement on certain environments like halfcheetah_mismatch and hopper_head50_tail50. ### Q3. Ablations on BC weighting strategies, including the unweighted strategy. We test three different weighting strategies in the main paper and the appendix. We additionally test another two weighting strategies and summarize here (see pdf in the global response for reward curves). 1. We tested SMODICE [1], LobsDICE [2], ORIL [3] and ReCOIL [9] (**Sec. 
F.2**) as representatives of value-based weights (the dual variable of DICE methods can be seen as an equivalent of the value function). 2. We use plain, unweighted behavior cloning as a standard baseline in all experiments. 3. In **Sec. F.6** in the appendix, we test Ours-V1 to see whether the design of $R(s)=\log\frac{c(s)}{1-c(s)}$ instead of being proportional to the discriminator output is reasonable. 4. We additionally test OTR [10] as the representative of Wasserstein-based weights, which does not work well when there is only a small portion of expert data in the task-agnostic data. 5. We additionally test MARWIL [7] in the global response as the representative of advantage-based weights. It performs similarly to plain BC. This is consistent with prior work [8]. **We found TAILO to significantly outperform all five strategies.** ### Q4. Minor Errors on the legend of Fig. 5. Thanks for the advice. As this year’s NeurIPS does not allow modification of the original draft, we will modify the figure in the camera-ready revision. ### Q5. Comparison to DWBC [16] **1. DWBC is not applicable to offline Learning from Observations (LfO).** In DWBC, the discriminator takes the state, action and $\log\pi(a|s)$ from the task-specific and task-agnostic data as input, the latter two of which are not computable in LfO as expert actions aren’t available. Further, DWBC uses three terms in its policy loss (see Eq. 5 of DWBC), but we are unable to calculate the first two terms due to the lack of expert actions in LfO data. The adaptation of DWBC to offline LfO is beyond the scope of this work. **2. Even with extra access to expert actions, DWBC is still unable to learn well from task-agnostic data with a very small portion of expert trajectories.** See pdf in the global response for reward curves. ### Q6. Real-World Applications for Incomplete Trajectories There are at least two possible applications for incomplete trajectories: **1. 
Learning from corrupted task-agnostic data.** In real-world scenarios, task-agnostic data are often accumulated from many different experiences, possibly with recording devices turned on and off in the middle, or even from the wild with unfamiliar sources and different formats (in which case alignment is required [17]). Therefore, it is very likely that some of the data are corrupted with a few steps missing in the middle of a long trajectory. **2. Learning from expert key frames and goals.** In robotic tasks such as navigation, it is common that a robot needs to move to another point with a few waypoints or the goal location as clues [18]; in physical simulation, we need to generate the desired agent behavior [19] by a few frames designated by the artists, as manually determining the position of every frame is prohibitively labor-intensive. In both cases, the task-specific dataset is incomplete with only examples or a few key frames. ### Q7. The novelty of using Positive-Unlabeled (PU) learning. 1. TAILO is not the first work to apply PU learning in offline Imitation Learning (IL); there is one prior work, ORIL [3], that also applies PU learning in offline IL. However, the two methods are significantly different; a thorough ablation in **Sec. F.6** shows that each difference between TAILO and ORIL leads to our performance gains. 2. The novelty of TAILO can be summarized as follows: a) a novel offline imitation from observation method that works well on a variety of problems; b) a simple and non-parametric way of determining weights for weighted behavior cloning; c) a novel way of acquiring a discriminator-based reward, inspired by positive-unlabeled learning designs from prior work, which is empirically shown to be effective. d) both theoretical (**Sec. C**) and empirical analysis on the shortcomings of state-of-the-art SMODICE and LobsDICE. ### References See global response. 
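As a concrete illustration of the reward design compared in Q1 above, the log reward $R(s)=\log\frac{c(s)}{1-c(s)}$ is simply the discriminator's log-odds. A minimal sketch (the `eps` clipping is our own guard against infinities at $c(s)\in\{0,1\}$, not something stated in the paper):

```python
import numpy as np

def log_odds_reward(c, eps=1e-6):
    """R(s) = log(c(s) / (1 - c(s))), the log-odds of discriminator output c(s).

    Unlike a linear reward such as 10 * c(s), the log-odds is unbounded as
    c(s) -> 1, so states confidently classified as expert dominate the
    downstream behavior-cloning weights.
    """
    c = np.clip(np.asarray(c, dtype=float), eps, 1.0 - eps)  # avoid log(0)
    return np.log(c) - np.log(1.0 - c)
```

For example, moving the discriminator output from 0.9 to 0.99 roughly doubles the log-odds reward, whereas a linear design barely changes it; this is the emphasis on clearly classified expert states described in the rebuttal.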
--- Rebuttal Comment 1.1: Comment: I would like to extend my thanks for the comprehensive response, which addresses most of the concerns I raised, particularly regarding the ablation studies. I maintain my original score of weak accept, as my viewpoint on the novelty remains largely unchanged.
Summary: This paper studies offline imitation from observations, assuming a small amount of task-specific expert states and task-agnostic non-expert state-action pairs are available. The method is to learn a discriminator to identify expert states in the task-agnostic dataset and then apply weighted behavior cloning to imitate states. Empirical results show that the proposed method TAILO outperforms state-of-the-art methods (the family of DICE), particularly for datasets with incomplete trajectories. Strengths: 1) The idea behind this approach is intuitive, well-motivated, and conceptually simple. 2) Empirically, TAILO performs well in learning from datasets with incomplete trajectories, significantly better than state-of-the-art approaches. 3) The empirical evaluation is extensive. Considering different ways and hyper-parameters to modify datasets, the authors show good performance across all these datasets. It is impressive that the same set of hyper-parameters for TAILO performs well in all these experiments. Weaknesses: 1) Compared with DICE methods (and the previous state-of-the-art), the proposed method lacks a theoretical foundation. The objective functions in Equations 4, 5, and 6 look mathematically complex, but we do not have a theory to support them. 2) The ablation study is missing. Thus, the importance of each component in TAILO is unclear. See the Questions section for more details. 3) In the experiments, the datasets are mostly manipulated, different from the original datasets used in previous work. This does not seem standard for benchmarking and comparing approaches. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) About the role of c'(s) in Section 3.2: if we directly learn c(s) without the help of c'(s), how much will it hurt the performance of TAILO? 2) In lines 150-152, intuitively, the value of the hyper-parameter \beta_1 should strongly affect the final performance. Could you show an ablation study on this? 
Do you have any intuitive explanation of why the same \beta_1 works well across different experiments (as mentioned in line 194)? 3) Could you please show experimental results on datasets exactly the same as those in the SMODICE paper? On these modified datasets, the performance of SMODICE looks poor. It either always keeps a low score or suddenly crashes. I'm wondering about the performance of TAILO on the standard benchmark datasets. It would be great to clarify how, given a specific dataset, we should choose between SMODICE and TAILO. Is TAILO always the better choice? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This work is based on the assumption that there exist expert trajectories/segments for the task of interest in the task-agnostic dataset. I'm concerned about the reliability or generalizability of this assumption. In real-world problems, when collecting a demonstration dataset, there seems to be only a small chance that this assumption holds. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback. ### Q1: Lack of theoretical foundation, especially for Eq. 4, 5 and 6. **1. Eq. 4 and 5 do not lack theoretical foundations.** The objective functions in Eq. 4 and 5 come from established Positive-Unlabeled (PU) learning works [24]. Eq. 4 is a binary classification objective that estimates the loss on the negative data using the mixed, unlabeled data distribution, with a max operator that limits the bias of the estimate; Eq. 5 is a selection between Eq. 4 and a standard binary classification loss. See **Sec. A** in the appendix (starting from line 509) for a detailed derivation of Eq. 4 and 5. **2. Eq. 6 is immediately intuitive.** Eq. 6 states that the weight for Behavior Cloning (BC) is determined by the discounted future return in the trajectory from the current state; the exp function serves as a thresholding method to emphasize the importance of trajectories with high return. **3. TAILO is based on intuitive improvements of carefully studied DICE methods, and it is effective.** The idea of TAILO is that we want to follow the reward function design used in SMODICE and LobsDICE, but we do not want their non-robust weight update (see **Sec. C** for the causes of this non-robustness). Instead, we find that a weight as simple as the thresholded future reward along the trajectory works better. Furthermore, we want to improve the classifier training of SMODICE and LobsDICE. As some examples are unlabeled, PU learning is introduced. ### Q2: Ablation studies are missing. All ablations mentioned in the question section were already provided in the appendix. **1. The role of c’(s).** The result of directly learning c(s) without the help of c’(s) is the “ours-V2” variant in **Sec. F.6** (see Tab. 4 in the appendix for the meaning of each variant). As Fig. 23 suggests, there is a significant gain in the Antmaze environment and a marginal gain in the kitchen environment with c’(s). **2. 
The effect of $\beta_1$.** The ablation study is in **Sec. F.4.2**. According to Fig. 14, the reward only decreases with extreme selections of $\beta_1$. Intuitively, there are two reasons why the same $\beta_1$ works well for different experiments: 1) the ratio of expert trajectories in our task-agnostic dataset is small (<5%), thus there is little expert data erroneously classified as safe negatives; 2) the Lipschitz-smoothed discriminator yields a good classification margin, and thus the 60%+ trajectories with lower average reward represent all non-expert data well. **3. Experimental results on datasets identical to those in the SMODICE paper.** The result is in **Sec. F.5.** As Fig. 22 suggests, our method is still better. Note that SMODICE works well in their paper because they report the better result between the KL and chi-squared divergences (and Sec. E.2 of the SMODICE appendix shows that the divergence chosen is crucial), while we report them separately. On all the datasets that SMODICE was tested on, TAILO improves, as TAILO is at least comparable to the better variant of SMODICE, while removing the need to select a divergence. ### Q3. Using manipulated datasets from prior work is not standard for benchmarking. **1. We also improve on standard benchmarks.** As stated in Q2 point 3, our method also improves on standard benchmarks. We also compare to offline RL methods on standard D4RL datasets (Q2 of reviewer iGFb), where the other methods have extra access to ground-truth reward labels. Nonetheless, the proposed approach performs well. **2. The original benchmark is close to being solved and cannot reflect the challenges discussed in our work, e.g., incomplete trajectories.** Thus, we construct new benchmarks to show that our method has even larger benefits against those challenges, and we think that researchers should go beyond the standard benchmarks. **3. 
Our comparison is fair as our testbed covers that of SMODICE.** SMODICE tests standard imitation from observation (Sec. F.5 in our work), imitation from examples (Sec. 4.4), and learning from mismatched dynamics (Sec. 4.5). We test TAILO on **every environment in every setting with exactly the same condition** and find consistent, significant gains. We further prove that our method is robust to incomplete trajectories (Sec. 4.2, 4.3) and task-agnostic datasets with less expert data (Sec. 4.1). **4. Our modifications to demonstrate the advantages are minimal.** Except for mismatched dynamics where we strictly follow SMODICE, we do not make any modification to the environment itself; we only modify the dataset. ### Q4. The assumption that the task-agnostic dataset contains expert segments does not hold in real-world problems. **1. The assumption is common in prior works.** This assumption is the basis of prior works in the skill-based learning community, e.g., SPiRL [12], SKiLD [13] and FIST [14], which assume that there are many action sequences from the task-agnostic data that can be utilized for the task of interest. Practically, the benchmarks of many recent works in offline imitation learning satisfy the property in the assumption (note mujoco medium and medium-replay data also contain expert trajectories, as shown in Fig. 4 of OTR [10] and Fig. 4 of decision transformer [6]), such as SMODICE [1], LobsDICE [2], OTR [10], ReCOIL [9], MAHALO [15], and decision transformer [6]. **2. The assumption has real-world applications.** For example, in the robotics community, the robot often needs to utilize overlapping skills, i.e., trajectory segments observed in other tasks, such as moving the robotic arm to a particular place and grabbing the item, to complete the current task. This is demonstrated in our kitchen environment, where the robotic arm needs to combine different skills (e.g. 
opening the cabinet, moving the kettle) to complete a particular procedure of interest. ### References See global response. --- Rebuttal Comment 1.1: Title: Thanks for the detailed response Comment: Thank you for the detailed response resolving my concerns. About the theory, it would be clearer to include such an explanation in the main text to help understand the equations. However, the theory behind Equations 4, 5, 6 was all from previous work, which is not the contribution of this submission. It is great to see more extensive experiments in the appendix about the ablation study. The response resolves most of my problems, but the contributions are not strong enough (especially the theoretical part). So I change my rating to just slightly lean toward acceptance. --- Reply to Comment 1.1.1: Title: Further Response to Reviewer d8U7 Comment: Thanks a lot for appreciating our response and for your valuable advice! We will follow your suggestion and include the explanation in the main text. Below are further clarifications in response to the reviewer’s comments: **Q1. The theory behind Eq. 4, 5, 6 was all from prior work and thus is not the contribution.** While Eq. 4 is from prior work, Eq. 5 is a new and important combination of Eq. 4 and binary classification. More critically, our two-step training paradigm of positive-unlabeled (PU) learning that utilizes Eq. 4 and 5 is novel for offline imitation learning. We showed its success in the appendix **Sec. F.6**: directly using Eq. 4 does not work on environments like halfcheetah-mismatch, while our two-step PU learning works well (normalized reward of 60 (ours) vs 0 (one-step PU); illustrated in **Fig. 6**). Also, in our latest response to Reviewer Re2g, we discussed in detail the unique contribution of our work relative to the RWR [1] line of work, which uses an objective similar to Eq. 6. Below is a brief summary of the differences: 1. Prior RWR works that use the policy objective of Eq. 
6 assume reward labels are available, which differs from our work that combines the objective with rewards from PU learning. 2. Prior RWR works that use the policy objective of Eq. 6 are theoretically derived from the Expectation-Maximization (EM) framework, while ours is not. Straightforward adaptations, such as MARWIL, are shown to be ineffective in our setting. 3. Our design choices, e.g., non-parametric return estimation, differ from each of the prior RWR works. **Q2. The theoretical contribution is not strong enough.** We believe that our contribution is significant for offline Learning from Observations (LfO), because we 1) analyze *both theoretically and empirically* the shortcomings of prior SOTA in LfO, and 2) propose a solution that is new, effective, yet arguably simple and leads to empirical success in a variety of scenarios. We believe that the significance of our contributions in LfO should not be underestimated merely because our solution is simple and its formulation resembles prior works proposed in different contexts or tasks. References: [1] J. Peters and S. Schaal. Reinforcement learning by reward-weighted regression for operational space control. In ICML, 2007.
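The two-step PU training discussed in Q1 builds on the non-negative risk estimator of Kiryo et al. (cited as [24] in the global response). Below is a minimal sketch; the sigmoid surrogate loss and all variable names are our illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

def sigmoid_loss(scores, label):
    # surrogate loss l(z, y) = sigmoid(-y * z) for label y in {+1, -1}
    return 1.0 / (1.0 + np.exp(label * scores))

def nnpu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk estimator (Kiryo et al., 2017).

    scores_pos: classifier scores on labeled positives (e.g., expert states)
    scores_unl: classifier scores on unlabeled (task-agnostic) states
    prior:      assumed positive class prior pi = P(y = +1)
    """
    risk_p_pos = sigmoid_loss(scores_pos, +1).mean()
    risk_p_neg = sigmoid_loss(scores_pos, -1).mean()
    risk_u_neg = sigmoid_loss(scores_unl, -1).mean()
    # clamping at zero keeps the estimated negative-class risk from
    # going below 0, which is what makes the estimator "non-negative"
    return prior * risk_p_pos + max(0.0, risk_u_neg - prior * risk_p_neg)
```

Minimizing such a risk by gradient descent yields a discriminator whose output can then serve as a learned reward, which is the role PU learning plays in the discussion above.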
Summary: The authors propose TAILO, Trajectory-Aware Imitation Learning from Observations, a method to solve MDPs from offline data in the form of task-specific expert states and task-agnostic state-action trajectories. The method addresses the instabilities of algorithms like DICE, takes the context of a trajectory into account even when it may be incomplete, and demonstrates across many MuJoCo environments that TAILO outperforms baselines, especially with incomplete trajectories. Strengths: + [Clarity] The paper is well-written, organized, and easy to follow. + [Clarity] The authors include relevant background about DICE and document baselines clearly. + [Originality] The authors propose a relatively simple solution, which is novel to the best of my knowledge. + [Quality] Experiments on several MuJoCo tasks demonstrate compelling performance. The authors additionally report results on the Franka kitchen environment. Weaknesses: - Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our work and are excited about the positive feedback.
Rebuttal 1: Rebuttal: We genuinely thank the reviewers for their valuable opinions and advice. We are delighted to see that the reviewers have carefully evaluated our work, provided much valuable feedback, and highlighted that 1) we study the DICE family of algorithms in offline LfO, pointing out that they suffer from missing data due to inaccurate value estimation or sparsity of observations, which causes undesired monotonicity (cited from reviewer iGFb); 2) the solution proposed in our paper is simple and effective; and 3) we have extensive empirical evaluations and a detailed appendix, which thoroughly demonstrate that our method improves upon baselines in many different scenarios. We respond to questions in the individual replies. We also include a pdf with additional figures. Finally, we use the following references in our replies: [1] Y. J. Ma et al. Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching. In ICML, 2022. [2] G. H. Kim et al. LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation. In NeurIPS, 2022. [3] K. Zolna et al. Offline Learning from Demonstrations and Unlabeled Experience. In Offline Reinforcement Learning Workshop at NeurIPS, 2020. [4] R. Kidambi et al. MOReL: Model-based Offline Reinforcement Learning. In NeurIPS, 2020. [5] T. Yu et al. MOPO: Model-based Offline Policy Optimization. In NeurIPS, 2020. [6] L. Chen et al. Decision Transformer: Reinforcement Learning via Sequence Modeling. In ICML, 2022. [7] Q. Wang et al. Exponentially Weighted Imitation Learning for Batched Historical Data. In NeurIPS, 2018. [8] X. Chen et al. BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning. In NeurIPS, 2020. [9] H. S. Sikchi et al. Imitation from Arbitrary Experience: A Dual Unification of Reinforcement and Imitation Learning Methods. In ArXiv, 2023. [10] Y. Luo et al. Optimal Transport for Offline Imitation Learning. In ICLR, 2023. [11] S. 
Höfer et al. Sim2Real in Robotics and Automation: Applications and Challenges. In IEEE Transactions on Automation Science and Engineering, 2021. [12] K. Pertsch et al. Accelerating Reinforcement Learning with Learned Skill Priors. In CoRL, 2020. [13] K. Pertsch et al. Demonstration-guided reinforcement learning with learned skills. In CoRL, 2021. [14] K. Hakhamaneshi et al. Hierarchical few-shot imitation with skill transition models. In ICLR, 2022. [15] A. Li et al. MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations. In ICML, 2023. [16] H. Xu et al. Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations. In ICML, 2022. [17] M. Chang et al. Semantic Visual Navigation by Watching YouTube Videos. In NeurIPS, 2020. [18] D. S. Chaplot et al. Neural Topological SLAM for Visual Navigation. In CVPR, 2020. [19] X. Peng et al. DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills. In ACM Transactions on Graphics, 2018. [20] I. Kostrikov et al. Imitation learning via off-policy distribution matching. In ICLR, 2020. [21] G. H. Kim et al. Demodice: Offline imitation learning with supplementary imperfect demonstrations. In ICLR, 2022. [22] Z. Zhu et al. Off-Policy Imitation Learning from Observations. In NeurIPS, 2020. [23] X. B. Peng et al. Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning. In ArXiv, 2019. [24] R. Kiryo et al. Positive-unlabeled learning with non-negative risk estimator. In NIPS, 2017. Pdf: /pdf/b7dfc1ba79174e540ba2dbb9db6b9a27123f3378.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a simple weighted BC algorithm for offline RL with missing data. The authors first show that the DICE family of algorithms suffers from missing data due to inaccurate value estimation or sparsity of observations, which causes undesired monotonicity. The authors propose training a reward model from PU data and using MC value estimation with BC to train a policy. Experimental results on offline RL benchmarks show that this method is more robust to missing data and outperforms previous DICE-family methods. Strengths: The paper discusses limitations of the DICE family of models in a practical way. The proposed method is also practical in a real setting. Results show robustness and overall improvement in the missing/noisy data regime. Weaknesses: - The paper studies the DICE family of algorithms with noisy data, but the proposed method is mainly independent of these findings. It is a new objective without any value function or duality, which feels like it is not focused on circumventing issues with DICE. - Model-based methods are missing from the comparison. - The proposed method is similar to [1], in which the authors use the exponential of the advantage with BC for the batch RL setting. While they don’t study the missing data setting, I believe the objective is more general and can be applied to yours as well. A detailed comparison would be helpful. - Which one is more crucial: learning a better reward using PU data or using discounted future rewards for the value estimate? What is the performance if you used one-level PU learning without safe examples or two-level PU learning with 1-step reward? - Eq (6) doesn’t handle missing data explicitly. What happens if some $i+k,k>0$ is missing from the trajectory? Do you just ignore that in the summation? - I would have expected that with more noise, DICE methods perform worse. But in Figure 2, SMODICE-KL performs well on Walker2d_1/3 and worse on Halfcheetah_1/5, while for others it is the opposite. Could you please explain why? 
[1] Exponentially Weighted Imitation Learning for Batched Historical Data. Qing Wang, Jiechao Xiong, Lei Han, Peng Sun, Han Liu, Tong Zhang. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see above for specific questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors study some limitations but a discussion on whether progress on D4RL reflects progress on real life would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
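For context on the [1]-style objective the review mentions: MARWIL/AWR-style methods weight behavior cloning by an exponentiated advantage. A hedged sketch follows; the clipping constant, temperature, and function names are our illustrative choices, not taken from any of the cited papers:

```python
import numpy as np

def exp_advantage_weights(returns, values, beta=1.0, w_max=20.0):
    """Exponentiated-advantage weights for weighted behavior cloning.

    returns: Monte-Carlo returns R(s, a)
    values:  baseline value estimates V(s)
    beta:    temperature controlling how sharply good actions are favored
    w_max:   clip keeping the weights numerically stable
    """
    advantage = returns - values
    return np.minimum(np.exp(advantage / beta), w_max)

# Weighted BC then minimizes  -E[ w(s, a) * log pi(a | s) ]  over the dataset.
```

The rebuttal below reports that a straightforward adaptation of this idea (MARWIL) performs similarly to plain BC in the missing-data setting studied here.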
Rebuttal 1: Rebuttal: Thanks a lot for the valuable feedback. ### Q1. The paper studies DICE with noisy data, but the proposed method does not focus on circumventing issues with DICE methods. Though our method looks very different, our motivation is to solve the issues in DICE methods as simply as possible. Below is a summary of our motivations: **1. We want to adopt the reward design of SMODICE [1] and LobsDICE [2], as they are empirically effective.** However, with expert segments in the task-agnostic dataset, this reward design with a standard binary classification objective sometimes fails. For this reason we improve it by introducing Positive-Unlabeled (PU) learning. **2. We want to address the problem that SMODICE and LobsDICE require complete task-agnostic trajectories to work well.** We found theoretically (**Sec. C** in the appendix) that the SMODICE and LobsDICE formulations are brittle if incomplete trajectories are present. We also observed this empirically, as the value function update is often non-robust (which is also true for ORIL [3]). Based on this, we find that simply using the discounted sum of future exp(R(s)) is a good solution that addresses both points and improves on all testbeds of SMODICE. Thus, though the final methods are very different, our idea is in spirit an improvement over DICE methods. ### Q2. Comparison to model-based methods. Great suggestion. We additionally compare our method to two model-based offline RL methods, MOReL [4] and MOPO [5]. Notably, MOReL is much slower to train, requiring 2 days as opposed to 4 hours for our approach on our machine (NVIDIA RTX 2080Ti). To improve efficiency, we provide MOReL and MOPO with ground-truth reward labels, i.e., MOReL and MOPO have an advantage. The trajectory with the best return is provided as the task-specific dataset to our method. Despite being agnostic to reward labels, our method is still **marginally better than MOReL** (**74.3 vs. 
72.9** reward averaged over 9 environments {halfcheetah, hopper, walker2d} * {medium, medium-replay, medium-expert}), and **much better than MOPO** (**74.3 vs. 42.1** average reward). Below is the detailed performance comparison:

| Dataset | MOReL | MOPO | TAILO (ours) |
|---|---|---|---|
| halfcheetah-Medium(M) | 42.1 | **42.3** | 39.8 |
| hopper-M | **95.4** | 28 | 56.2 |
| walker2d-M | **77.8** | 17.8 | 71.7 |
| halfcheetah-Medium-Replay(MR) | 40.2 | **53.1** | 42.8 |
| hopper-MR | **93.6** | 67.5 | 83.4 |
| walker2d-MR | 49.8 | 39.0 | **61.2** |
| halfcheetah-Medium-Expert(ME) | 53.3 | 63.3 | **94.3** |
| hopper-ME | 108.7 | 23.7 | **111.5** |
| walker2d-ME | 95.6 | 44.6 | **108.2** |
| average | 72.9 | 42.1 | **74.3** |

### Q3. Add comparison to MARWIL [7]. We additionally compare to MARWIL as requested, and we find that, consistent with prior work [8], **MARWIL performs similarly to plain BC and worse than our method.** See the pdf in the global response for reward curves. ### Q4. The effect of using PU learning vs. using future rewards for value estimates, and more ablations on PU learning. Part of the answer can be found in the appendix **Sec. F.6**; we additionally conduct experiments for the remaining questions and summarize them here (see the pdf of the global response for reward curves). **1. Using future rewards for value estimates is more crucial, but one-level PU without safe examples also leads to a performance drop in some environments.** The comparison between ours and the ORIL variants in **Sec. F.6** shows that even with rewards from PU learning, RL often fails. Using one-level PU learning without safe examples causes a significant performance drop in some environments, as Figs. 23 and 24 depict. **2. We additionally test the results with two-level PU learning + 1-step reward and without PU learning.** The results show that while performance does drop without PU learning, it is less important than using the discounted sum of future rewards instead of the 1-step reward (i.e., $\gamma=0$) for value estimation. ### Q5. 
About handling missing data in the task-agnostic dataset. Yes, we ignore missing data in the summation in Eq. 6. The idea is that the weighted sum does not need to be very accurate to work well; e.g., if every other step is missing, the result amounts to summing over every other future state with $\gamma'=\gamma^{0.5}$. Because $\gamma$ is close to $1$ in long-horizon environments, the result does not change much. ### Q6. The unintuitive behavior of SMODICE-KL. **1. Collapses of SMODICE-KL come from value function divergence.** As stated in **Sec. C.2.1** of the appendix, SMODICE-KL diverges with missing data in the middle of the task-agnostic trajectories. When this happens (e.g., halfcheetah_1/5 and walker2d_1/5 in Fig. 2), SMODICE-KL collapses; otherwise, it performs decently well (see the pdf in the global response for a visualization). **2. The smoothing effect of the Neural Network (NN) mitigates the divergence, but the strength of the effect varies.** As the data distribution differs, the smoothing effect varies across datasets. Thus, the time it takes to diverge differs. **3. With more and uniform updates, the abnormality caused by NN smoothing disappears.** As shown in **Fig. 19**, when the batch size is larger, the updates on each data point are more frequent and uniform. With such a batch size, the trend that increased noise leads to worse performance of SMODICE-KL (thin blue line) is more obvious. ### Q7. Limitations on the gap between D4RL and real-life progress. Great advice. We will add the following discussion to the limitation section in the camera-ready version: A limitation of our work is that our experiments are based on simulated environments such as D4RL. Thus a gap between our work and real-life progress remains. While we follow the settings of many recent works, such as SMODICE [1], ReCOIL [9] and OTR [10], bridging the gap using techniques such as sim2real from the robotics community [11] is another very important direction for future work. 
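The Q5 answer (ignoring missing steps in the Eq. 6 summation) can be sketched as follows; the function name and the dict-based representation of a trajectory with gaps are our assumptions for illustration:

```python
def future_weight(exp_rewards, t, gamma):
    """Discounted sum of future exp-rewards, skipping missing steps.

    exp_rewards: dict mapping time index -> exp(R(s)); indices of
                 missing (dropped) observations are simply absent
    t:           current time index
    gamma:       discount factor, close to 1 in long-horizon tasks
    """
    # missing indices contribute nothing; present ones keep their
    # true time offset k - t in the discount exponent
    return sum(gamma ** (k - t) * r
               for k, r in exp_rewards.items() if k >= t)
```

For instance, with a constant exp-reward and every other step missing, only the even offsets contribute, so for $\gamma$ near 1 the result stays close to the full sum, matching the robustness argument above.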
### References See global response. --- Rebuttal Comment 1.1: Title: Thanks for Your Valuable Advice Comment: As the end of the author-reviewer discussion period approaches, we would like to thank you again for your valuable advice in the review. We hope our rebuttal has addressed your concerns, and we would appreciate your feedback on our response. Please let us know if you have any further questions; we are more than happy to clarify. --- Rebuttal Comment 1.2: Title: Thank you for your response! Comment: Thank you for the clarifications and additional experimental results. I increased my score. --- Reply to Comment 1.2.1: Title: Response Comment: Thanks for your appreciation of our response!
LD2: Scalable Heterophilous Graph Neural Network with Decoupled Embeddings
Accept (poster)
Summary: This paper aims to optimize the design of graph neural networks on large-scale heterophilous graphs. It proposes a new framework called LD2, which decouples the node feature embedding and topology embedding. By obtaining a low-dimensional adjacency embedding and long-distance feature embedding through pre-computation, it achieves faster and better performance for mini-batch training. Strengths: 1. The problem studied by this paper is interesting. Model efficiency is not well studied on large-scale heterophilous graphs, and this research is meaningful to the community. 2. The paper is well-organized and clearly presented. The authors make a thorough time complexity analysis and give sound solutions. Each component of the model is explained with reasonable logic. 3. The experimental results look promising, with a clear advantage in test accuracy and computational cost. Weaknesses: 1. The comparison between LD2 and other models is not fair. The core component of LD2 is its precomputation stage. However, from its code, this part is implemented in C++ while other GNNs are implemented in Python. The computation of $A^2$ and any operation involving it could be very expensive. Usually, $A^2$ can be $10$x or $100$x denser than $A$, which is why most GNNs do not use it on large-scale graphs. Therefore, first, the time/space complexity for the precomputation of LD2 in Table 1 cannot reflect the actual computational cost (ignoring the difference between $A$ and $A^2$), and second, the time comparison between LD2 and other models is unfair due to the different programming languages. 2. Although LD2 is claimed to be a scalable heterophilous GNN, its main contribution (precomputation) is closer to node embedding methods. LD2 combines and simplifies some existing techniques, which makes its contribution limited. I think the authors should also compare the embedding obtained from precomputation with some node embedding methods like node2vec. 
I expect there will be some tradeoff, but the discussion of this part is entirely missing. 3. The authors miss the baseline [1]. [1] Sunil Kumar Maurya, Xin Liu, and Tsuyoshi Murata. Simplifying approach to node classification in graph neural networks. Journal of Computational Science, pp. 101695, 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I do not quite understand the formula in line 270: $R^{(l+1)}(u) = R^{(l+1)}(u) + R^{(l)}(u)$; what does it mean? 2. It looks like Line 4 of Algorithm 1 contains a typo (it should be the union rather than an intersection). 3. Why are the dataset statistics of snap-patents and wiki different from those in [1]? 4. Why is the batch size of LD2 larger than LINKX's on most datasets? And why is the performance of full-batch LINKX (Table 7) different from that in [1]? I am not sure if the authors reproduced the correct results of LINKX. [1] Lim, D., F. Hohne, X. Li, et al. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. In 34th Advances in Neural Information Processing Systems. 2021 Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors carefully address them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## W1 There are two concerns regarding comparison fairness. We would like to address them separately. ### Time complexity of 2-hop propagation As elaborated in lines 275 & 287, we *do not explicitly compute the $A^2$ matrix* in propagation. Instead, we apply the $A$ multiplication twice to the embedding matrix to realize the 2-hop propagation. For each such sparse-dense matrix multiplication, the sparse adjacency matrix $A$ contains $m$ non-zero entries, and the dense matrix is of shape $n\times F$. The time complexity for one 2-hop propagation is therefore $2mF$. When such propagation is performed $L_P$ times, the total complexity is bounded by $O(L_PmF)$, as we present in Table 1. The memory expense for storing variables is only $O(nF)$ for each channel. ### Difference in programming languages * We would like to draw the reviewer's attention to the fact that the efficiency of our model *stems from improvements in complexity*. Specifically, by employing the decoupling technique, we remove the iterative $O(IL_PmF)$ complexity from training, which is the primary contribution of this paper to scalable heterophilous GNNs. * We argue that *experimental comparisons remain meaningful even when the programming languages differ*. For example, comparisons can be established between `pokec` and `wiki`, which have similar $n$, while `wiki` has 10x more edges $m$. **Table 3** shows that the training time of baseline models increases substantially on `wiki`: LINKX is 6x slower than on `pokec`, and GCNJK-GS is 3.5x slower, mainly due to the scale of $m$. In contrast, LD2 exhibits no significant change in training time and only a 1.5x slowdown in precomputation, which validates the enhanced efficiency. These comparisons are among the models themselves and hence independent of programming languages. However, the results still underscore our model's scalability. 
* In addition, most baseline implementations in the experiments also exploit various acceleration techniques, such as CUDA matrix multiplication, high-performance libraries, and parallel computation. We believe that the comprehensive comparison — considering both theoretical analysis and experimental evaluation, as presented in our paper — is generally convincing. ## W2 We would like to first clarify our contribution, then discuss the experiments on embedding methods. ### Contribution of LD2 model * Different from plain *node embedding methods*, our precomputation process is particularly designed for GNNs. Common node embedding algorithms are based on the graph structure alone. In comparison, we propose the Long-distance Feature Embeddings, which encode both graph topology and node attributes in accordance with the GNN propagation scheme. * Compared with *existing GNN techniques*, our embeddings specifically fit the decoupled architecture. We propose three non-trivial embeddings with an end-to-end precomputation. As the per-model comparison in **Section B** of the supplementary material suggests, our decoupled embeddings are novel and different from existing approaches. * For the *GNN community*, we target the scalability issue of heterophilous GNNs, which is not well studied in the literature. We introduce the decoupling technique and achieve improved time and memory complexity. LD2 therefore contributes a scalable GNN solution as a whole. ### Experiments on node embedding We conduct additional experiments by replacing the $A^2$ adjacency spectral embedding (ASE) with node2vec in **Table IV** in the PDF. The results indicate that the node2vec model only achieves suboptimal accuracy on `squirrel` and `penn94`, while exceeding the RAM limit on `genius`. The former can be explained by the local neighborhood representation of node2vec, which is less suitable for heterophily. The OOM error is associated with the $O(nd^2)$ space/time complexity of node2vec. 
We refer the reviewer to the response to reviewer ZDfg Q2 for a more detailed comparison. ## W3 We conduct additional experiments on FSGNN [23], shown as **Table III** in the PDF. The full-batch performance complements that in Tables 7 & 11. [23] only evaluates small-scale graphs. On the larger datasets in our experiments, the model exceeds the GPU memory limit due to the storage of all $L_P$ embeddings. Such $O(L_P nF)$ memory complexity proves to be less scalable. ## Q1 The formula is extracted from line 6, Algorithm 1. Given that $LR=(I-A)R$, each node in propagation inherits its previous embedding ($IR$), while updating its neighbors by $-AR$. In the formula, $R^{(l)}(u)$ denotes the embedding of $u$ before the current propagation, and $R^{(l+1)}(u)$ records the embeddings that have been propagated to $u$ from other nodes in the current iteration of propagation. When the propagation is applied to $u$, $R^{(l+1)}(u)$ is increased by the value of $R^{(l)}(u)$, which is the meaning of the formula. ## Q2 We thank the reviewer for pointing out the typo. We will correct it in the revised version. ## Q3 We deduce that the difference in node and edge counts is caused by our data processing scheme, which removes isolated nodes from the graph. ## Q4 We directly utilize the code in [26] to reproduce LINKX. We reckon that the inconsistency may be caused by: * *Hyperparameter configuration*: As [26] does not provide the exact hyperparameters but only grid search scripts, the configurations in our experiments are possibly different. Our exploration is reported in Table 5 in the supplementary material. * *Batch size*: As described in Section C.4, we select the batch size for each model to maximize GPU utilization, as we mainly focus on the scalability evaluation. LINKX uses a relatively smaller batch size because of the higher memory demands of its model weights. 
* *Full-batch performance*: We inherit the settings in Table 5 for full-batch evaluation to ensure the consistency of efficiency, which may not be the optimal settings for accuracy. Nonetheless, the comparison does not affect our main contribution of scalable mini-batch GNN. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I first want to thank the authors for addressing my concerns on the time complexity, the differences in programming languages, the clarification of the contributions, and the additional experiments. Regarding the formula in line 270 (Q1), I think it should be made clearer; as this is a math equation rather than pseudo code, it is better to differentiate the `embeddings that have been propagated to u` and the embeddings with the residual connection, or use $\leftarrow$ to replace $=$. For Q4, I carefully reviewed the hyperparameters listed in Table 5. The problem is that the hidden size chosen for LINKX is too large (typically 8-64, at least for the datasets used in the original paper), which will lead to overfitting and degenerated performance. I think the authors could double check this part. Right now, I will keep my score. --- Reply to Comment 1.1.1: Title: Response to Comments by Reviewer 4icY Comment: We sincerely thank the reviewer for the time and effort in reviewing our paper and rebuttal. > Regarding the formula in line 270 (Q1), I think it should be made clearer; as this is a math equation rather than pseudo code, it is better to differentiate the embeddings that have been propagated to u and the embeddings with the residual connection, or use $\leftarrow$ to replace $=$. We thank the reviewer for the suggestion. We will improve the presentation in the revised version. > For Q4, I carefully reviewed the hyperparameters listed in Table 5. The problem is that the hidden size chosen for LINKX is too large (typically 8-64, at least for the datasets used in the original paper), which will lead to overfitting and degenerated performance. 
I think the authors could double check this part. We have reviewed the LINKX implementation and our parameter search. The LINKX paper searches the hidden size parameter within the range [16, 32, 128, 256]. The range we used in the main experiments is usually [16, 32, 128, 256, 512], with only one additional value, $512$, as it is also commonly used in other models. Below we display the results of LINKX with different hidden size parameters on some representative datasets:

| Hidden | `genius` Acc | Train | Infer | RAM | `twitch-gamers` Acc | Train | Infer | RAM | `pokec` Acc | Train | Infer | RAM |
|:---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 16 | 82.34 | 1.84 | 0.011 | 5.26 | 63.46 | 1.19 | 0.017 | 5.57 | 68.33 | 9.68 | 0.083 | 7.14 |
| 32 | 82.46 | 2.85 | 0.012 | 5.27 | 63.37 | 1.40 | 0.020 | 5.58 | 67.87 | 8.34 | 0.082 | 7.14 |
| 128 | 82.51 | 2.06 | 0.020 | 5.35 | 63.66 | 2.91 | 0.055 | 5.56 | 67.22 | 13.41 | 0.175 | 7.17 |
| 256 | 82.31 | 4.90 | 0.036 | 5.56 | 64.11 | 5.58 | 0.100 | 5.55 | 68.82 | 23.14 | 0.290 | 7.58 |
| 512 | 82.54 | 7.07 | 0.073 | 5.96 | 64.44 | 10.99 | 0.192 | 5.56 | (OOM) | | | |

It can be observed that $hidden=256$ or $512$ usually achieves the best accuracy, benefiting from the larger model capacity. For large hidden size values, the runtime increases nearly linearly. As elaborated in Section C.4, the target of our evaluation of baseline models is to provide comparable efficiency performance while maintaining their accuracy. Hence we think that our settings are reasonable in achieving high accuracy while controlling the model width.
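For reference, the 2-hop propagation defended under W1 above (applying the sparse $A$ twice to the dense embedding, never materializing $A^2$) can be sketched as follows; the function name and the toy adjacency are our illustrative assumptions:

```python
import numpy as np
from scipy.sparse import csr_matrix

def two_hop_propagate(A, X, n_prop=1):
    """Realize A^2-propagation via two sparse-dense products per step.

    A: sparse n x n adjacency with m non-zeros (CSR format)
    X: dense n x F embedding matrix
    Each step costs O(2 m F) time and O(n F) memory; the possibly much
    denser matrix A^2 is never formed explicitly.
    """
    for _ in range(n_prop):
        X = A @ (A @ X)  # two sparse-dense multiplications
    return X

# toy check: for a 2-node path graph, A^2 = I, so X is unchanged
A = csr_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))
X = np.array([[1.0], [2.0]])
out = two_hop_propagate(A, X)
```

This is why the rebuttal's Table 1 complexity depends on the number of non-zeros of $A$, not of $A^2$.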
Summary: This paper introduces a scalable decoupled model designed to address the challenges posed by large-scale heterophilous graphs. The model consists of two main components. In the first component, recognizing the effectiveness of $A^2$ in heterophilous graphs, the model precomputes a matrix, denoted as $P_A$ in Equation (2), which serves as a representation of the graph structure. This matrix is computed using a randomized eigendecomposition technique. In the second component, the authors leverage multiple hop-based coefficients of propagation to address the issue of long-range dependency. These coefficients are designed to capture information from nodes at different distances in the graph, allowing the model to effectively capture dependencies that span multiple hops. Finally, the two components are combined, and the resulting representation is passed through an MLP to generate the final node representations. The authors provide some theoretical support and conduct a complexity analysis to demonstrate the effectiveness and efficiency of the proposed model. The authors also conduct extensive experiments to evaluate the performance of their model on heterophilous graphs of different scales. The model achieves outstanding performance compared to some popular baselines while minimizing the computational resources (GPU) required. Strengths: 1. The authors employ a technique to compress the graph structure $A^2$ into a low-rank format while preserving the principal components. This approach effectively reduces the computational cost associated with the training phase. 2. The authors conduct various experiments to investigate the coefficients of propagation and successfully identify the optimal approach for handling heterophilous graphs. 3. The theoretical time complexity of the model is excellent. The pre-computation phase is conducted once, and the training phase supports mini-batch processing. These properties guarantee the scalability of the model. 4. 
The performance of the model surpasses the majority of existing results, thereby confirming its effectiveness. Weaknesses: 1. The contributions to addressing heterophily are limited, as the use of the raw graph structure and of multi-hop neighbors has already been proposed in LINKX and spectral GNNs. 2. The method employed to reduce the dimensionality of the adjacency matrix, randomized SVD, is a commonly used approach. However, its performance heavily relies on the number of iterative rounds. The authors appear to merge the computational cost of SVD into that of the propagation step, which could potentially limit the effectiveness of the algorithm. 3. The experiments conducted in this work exclude results on well-known homophilous datasets such as Cora, PubMed, and Ogbn-products. Since this model is specifically designed for heterophilous graphs, it should still maintain comparable results on homophilous datasets relative to the mentioned baselines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the performance of the model on homophilous datasets? Is the performance comparable to some recent works? 2. Will the model still run normally when confronted with extremely large graphs such as Ogbn-Papers100M and real-world data with extremely high-dimensional node embeddings? 3. It is mentioned in paper [1] that additional heterophilous datasets are available. Conducting more comprehensive experiments on those datasets is therefore necessary to strengthen the evaluation and analysis presented in this work. [1] Platonov O., Kuznedelev D., Diskin M., et al. A critical look at the evaluation of GNNs under heterophily: are we really making progress? ICLR, 2023. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The effectiveness of randomized SVD when the number of propagation steps is limited is in doubt. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## W1 We wish to highlight that our major contribution lies in proposing a GNN design that specifically targets the scalability issue under heterophily. * Our model achieves *improved time and memory complexity*. Compared to spectral GNNs, our design escapes iterative and full-graph propagation. Compared to LINKX, our model removes the architectural dependence on graph size, thereby substantially enhancing scalability and minibatch capability. * We propose *novel and effective embeddings* with end-to-end precomputation. Different from spectral GNNs, our multi-hop calculation is performed in a one-time decoupled manner. Our adjacency embedding design specifically addresses the drawback of LINKX. As analyzed in line 192, it can effectively approximate the corresponding component in LINKX at significantly reduced computation cost. * Evaluations demonstrate LD2's *empirical performance on minibatching and scalability*. In comparison, the multi-hop representative MixHop encounters performance degradation brought by minibatching, while LINKX exhibits worse scalability, such as RAM usage. A more detailed comparison with multi-hop GNNs and LINKX is offered in **Sections B.2 & B.4** of the supplementary material. ## W2 We would like to address the concern about the adjacency embedding calculation in two parts: ### *Theoretical analysis* on convergence For a general power iteration decomposing a matrix, the stopping criteria are usually both a maximal iteration count and an error tolerance [R5]. In our Algorithm 1, they correspond to the maximal hop $L_P$ and the push threshold $\delta_P$. The number of iterations required for convergence can be derived as in line 288. For a rough estimate, substituting our common setting $F=512$, $\delta_P=10^{-5}$, and $\lambda_{F+1}/\lambda_{F}\approx 10^{-1}$ gives an estimate of $O(10)$ iterations, which is of the same order as our propagation hop $L_P=20$. 
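As an illustration of this back-of-the-envelope estimate (the spectral-gap ratio $10^{-1}$ is the assumed value from the discussion above, not a measured quantity), the iteration count at which the power-iteration error falls below the push threshold can be computed as:

```python
import math

def iterations_needed(delta_p, ratio):
    """Rough power-iteration estimate: the error of the trailing components
    shrinks roughly like ratio**k, so solve ratio**k <= delta_p for k
    (assumes 0 < ratio < 1). Returned as a float for illustration."""
    return math.log(delta_p) / math.log(ratio)

# Assumed values from the text: delta_P = 1e-5, lambda_{F+1}/lambda_F ~ 1e-1.
k = iterations_needed(delta_p=1e-5, ratio=1e-1)
print(round(k))  # 5 -- on the order of O(10) iterations
```

A larger spectral-gap ratio (a flatter spectrum) would push this count up, which is why the estimate depends on the eigenvalue drop around rank $F$.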
The analysis can be supported by the spectrum distribution in **Figure 4** in the supplementary material. A large number of datasets exhibit a rapid drop of eigenvalues around a certain rank $F$. When setting the embedding feature dimension close to or slightly larger than the proper rank, the computation is able to achieve favorable convergence of the leading components. ### *Empirical effect* of propagation steps We provide empirical evidence on the effectiveness of the approximate adjacency embedding by investigating the test accuracy of learning solely on $P_A$. The performance is reported as orange dash-dotted lines in **Figures 3 & 6**. Figure 3(b) is a representative example on `pokec`. It can be observed that for $L_P = 2$ and $4$ the accuracy is suboptimal, while for $L_P > 8$ the performance generally converges. It is also noteworthy that during power iteration, the leading components converge in fewer iterations. As neural networks tend to focus more on larger feature values, the leading components, representing the low-frequency spectrum, can effectively embed structural information even when the entire matrix has not converged. More iterations are generally helpful in securing high-frequency information. [R5] Golub G. H., Van Loan C. F. Matrix Computations. pp. 450-457. 2012. ## W3 & Q1 We conduct a comprehensive evaluation on 5 well-known homophilous datasets `protein`, `ogbn-arxiv`, `yelp`, `reddit`, and `ogbn-products` (named `amazon`) in **Tables 8, 9, 12, and 13** in the supplementary material. We also conduct additional experiments on `cora` and `pubmed` in **Table I** in the PDF. We compare the performance with 16 models. We refer the reviewer to the above tables for detailed results. We conclude that LD2 is capable of attaining comparable efficacy and remarkable efficiency on homophilous datasets. Its minibatch performance generally surpasses both homophilous and non-homophilous competitors. 
Moreover, it demonstrates impressive scalability in learning time and memory, which is consistent with the main experiments. Notably, LD2 is the only model that does not exceed the memory limit on any dataset in full-batch settings. ## Q2 We conduct additional experiments on `ogbn-papers100m`, which is the largest dataset available. Results are shown in **Table II** in the PDF. It is worth mentioning that all 15 GNN-based baselines encounter out-of-memory errors on such a large graph. Although the dataset is homophilous, our LD2 model achieves reasonable accuracy with efficient time and memory usage. With respect to the embedding dimension, it can be inferred from **Table 1** that our model exhibits an $O(FF')$ complexity, where $F$ is the raw node attribute dimension and $F'$ is the model hidden dimension. This is on par with other GNN models: when the model architecture is fixed, this can be considered a linear complexity with respect to $F$. Our experiments evaluate datasets such as `penn94` with a large $F=4.8K$. Performance on these datasets with high embedding dimension is consistent with our scalability analysis. ## Q3 We conduct additional experiments on `roman-empire`, `minesweeper`, `amazon-ratings`, and `tolokers` proposed in [42]. Results are shown in **Table I** in the PDF. These graphs are relatively small, comprising fewer than $30K$ nodes, compared to the large-scale ones in our main experiments. As mentioned in [42], these heterophilous datasets emphasize effectiveness rather than scalability, hence the main paper does not include them in our initial evaluation. We observe that the efficiency and efficacy strengths of our LD2 model persist on these datasets. Specifically, the minibatch baselines undergo substantial performance degradation on `roman-empire` and `minesweeper` when compared to the full-batch results in [42]. We surmise that this is associated with the longer diameter and stronger non-local dependency of these two graphs. 
As we point out in Section 2, the sampling strategies exploited in these models result in information loss, which greatly hinders performance on such heterophilous graphs. --- Rebuttal Comment 1.1: Title: Thank you for the response. Comment: Thank you for your convincing response and additional experiments. After reading the additional results, my concerns about W1 and W3 are still not well addressed. The performance of the proposed model on homophilic datasets is limited, which remains the shared challenge of works concentrating on heterophily. For example, on ogbn-papers100M, the performance of LD2 is far inferior to SGC, whose test accuracy is 0.6329. Also, the superiority of LD2 over multi-hop GNNs is still unclear to me. As shown in [1], spectral GNNs are able to handle large graphs (even ogbn-papers100M), and the results seem to be better. To sum up, I really appreciate the efforts of the authors in addressing scalability. But I still believe the whole model is somewhat restricted in the process of addressing heterophily. After consideration, I'd like to hold the current score. And I'm looking forward to more well-developed future work. [1] Guo Y., Wei Z. Graph Neural Networks with Learnable and Optimal Polynomial Bases. In ICML 2023. --- Reply to Comment 1.1.1: Title: Thanks for your comments Comment: We thank the reviewer for the response and constructive comments. We would like to make further clarifications on the contribution of our paper and the comparison with other models: > The performance of the proposed model on homophilic datasets is limited, which remains the shared challenge of works concentrating on heterophily. For example, on ogbn-papers100M, the performance of LD2 is far inferior to SGC, whose test accuracy is 0.6329. (1.1) Regarding the **performance on homophilous datasets**, as we stated, our contribution lies in proposing a GNN design that targets the scalability issue under heterophily. 
As a consequence, the channels and model are specifically designed for heterophilous settings, and may not be directly suitable for homophilous graphs. In other words, our model does not intend to achieve state-of-the-art performance on homophilous datasets, but mainly to address a range of datasets under heterophily where *homophilous models usually fail to achieve good performance*. (1.2) In particular, regarding the **SGC performance on `ogbn-papers100m`**, as far as we know, the result presented on the OGB Leaderboard was obtained *fully on CPU* using more than 150GB of RAM. In our attempt to reproduce the experiment in our environment, the calculation also took too long to produce comparable results. We believe this aptly demonstrates the scalability of our model, which completes minibatch training on a GPU (24GB) with 105GB of RAM. > Also, the superiority of LD2 over multi-hop GNNs is still unclear to me. As shown in [1], spectral GNNs are able to handle large graphs (even ogbn-papers100M), and the results seem to be better. To sum up, I really appreciate the efforts of the authors in addressing scalability. But I still believe the whole model is somewhat restricted in the process of addressing heterophily. (2.1) Regarding the **interpretation of LD2**, as we stated, LD2 is among the first models to introduce the decoupling strategy to heterophilous settings. We also provide the equivalence of LD2 channels to channels in the spectral domain in Section A of the supplementary material. We think our model offers a scalable solution with simplified computation and better complexity, addressing the scalability issue of previous works. (2.2) Regarding the **comparison between LD2 and other GNNs**, we would like to highlight that *[R6] also employs the decoupling strategy and mini-batch training* on large-scale graphs, as described in its Section 4.4. This exactly echoes our LD2 in utilizing these techniques to address the scalability issue. 
Our model differs from [R6] in that it possesses a series of specifically designed channels, while [R6] learns to acquire the spectral channels. The learning process of [R6] already demands a complexity of no less than $O(L(m+n))$. As the code released by [R6] only includes a full-batch implementation, we are not able to conduct further empirical evaluation of this model. [R6] Guo Y., Wei Z. Graph Neural Networks with Learnable and Optimal Polynomial Bases. In ICML 2023. \* We would like to note that [R6] is concurrent work, as it was accepted on April 24, less than 2 months before our submission.
Summary: This paper studies an important problem of graph learning on large-scale heterophilous graphs. The paper presents a novel model, LD2, which decouples the embedding process from the convolutional process, allowing for more efficient and scalable learning. LD2 learns graphs under heterophily, which is particularly useful for large-scale graphs. Extensive experiments demonstrate the effectiveness of the proposed method compared to existing baselines. Strengths: - The paper addresses an important problem on large-scale heterophilous graphs, which is especially important for large-scale datasets. - This paper proposes a scalable graph learning method, the LD2 model, which decouples the embedding process from the convolutional process and allows for more efficient and scalable learning. - The scalability problem is well defined, and the theoretical comparison with previous works is clearly presented. - Sufficient experiments show that LD2 is capable of lightweight minibatch training on large-scale heterophilous graphs, with up to 15× speed improvement and efficient memory utilization, while maintaining comparable or better performance than the baselines. - Experiments on several benchmark datasets demonstrate the effectiveness of the LD2 model compared to existing baselines, and the results show that the LD2 model outperforms existing methods in terms of both accuracy and scalability. Weaknesses: - The paper could benefit from a more detailed explanation of the LD2 model. While the authors provide some high-level descriptions of the model, it would be helpful to have a more in-depth explanation of the underlying mechanisms. - It is difficult to reproduce the results without access to the code used in the experiments. While the authors provide some details on the experimental setup, it would be helpful to have access to the code to ensure that the results are reproducible. 
I am willing to increase the score for this paper if my major concern about reproducibility is addressed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is it possible for the authors to make the code used in the experiments public? This would help ensure that the results are reproducible and would be a valuable resource for researchers interested in implementing the LD2 model. - Could the authors provide more detailed information on the underlying mechanisms of the LD2 model? - Could the LD2 model handle noisy or incomplete graph data? While the authors mention that the approach is robust to noise, it would be helpful to have a more detailed analysis of the model's performance under different levels of noise and incompleteness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations and potential improvements are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## W2 & Q1 Yes. A preliminary version of the project code, example data, and reproducibility instructions has already been provided in **Section H** of the supplementary material. We have also sent the same link to the AC in a separate comment, following the rebuttal policy. ## W1 & Q2 The proposed LD2 model is a two-stage model, as illustrated in Figure 1. In other words, its precomputation and feature transformation are decoupled into separate stages, as expressed in Eq. (1). We elaborate on the underlying mechanisms of the two stages respectively as follows. ### (1) Precomputation The precomputation stage takes as input the graph adjacency matrix $A$ and the node attribute matrix $X$, and outputs four embedding matrices of shape $n\times F$. The goal of precomputation is to retrieve useful information from the graph structure and features to form the embeddings, which are utilized in the subsequent feature transformation stage. The entire precomputation is described in Algorithm 1. Specifically, the computation involves propagations, i.e., multiplications by the sparse adjacency matrix $A$, along with subsequent operations according to the different definitions, to acquire the four embeddings: * The adjacency embedding $P_A$ is computed by 2-hop power iteration, mainly multiplying the intermediate embedding matrix by $A$ twice in each iteration. After convergence, the embedding follows the expression $P_A = U |\Lambda|^{1/2}$ as described in line 173. * The first feature embedding is expressed as $P_{X,H} = \sum_{l=1}^{L} (I-A)^l X$ as in line 206. Its precomputation is conducted by iteratively multiplying the input matrix $X$ by $(I-A)$ for $L$ times and summing up the results. * Similarly, the second feature embedding is $P_{X,L2} = \sum_{l=1}^{L} A^{2l} X$ as in line 207. To acquire this embedding, we multiply the input matrix $X$ by $A$ for $2L$ times, with summation. 
* The last embedding is exactly the input attribute matrix $P_{X,0} = X$. ### (2) Feature transformation A neural network is applied to iteratively learn from the input embeddings. First, an individual weight matrix is applied to each embedding matrix. Then the multiplication results are concatenated and fed into the remaining MLP layers. The model is trained by minimizing the loss associated with the classification task. We also offer an interpretation from the spectral perspective, elucidating the expressiveness of decoupled propagation and our embedding designs from the viewpoint of graph signaling, in **Section A** of the supplementary material. ## Q3 We would like to address the question in two parts. ### (1) Interpretation of the model robustness In line 250 we claim that the model is robust to a certain level of noise introduced by propagation. The purpose of the statement is to support the utilization of approximate propagation, signifying that the embeddings acquired by precomputation do not need to be precise, as they are further processed by the MLP model. Such a robustness property has been observed as an interpretation of GNN learning [41], wherein the learning objective is to recover a clean and smooth representation from the noisy input features. Models such as [29], [31], and [25] have already implemented approximate propagation within the context of homophilous GNNs. In our model, the precision of approximate propagation, which is the only source of computational noise, is controlled by the push threshold $\delta_P$ as in Algorithm 1. As described in Section E.3 of the supplementary material, we set $\delta_P=1\times 10^{-5}$ for common experiments, which is relatively small compared to the scale of feature values standardized to the Gaussian distribution $N(0,1)$. ### (2) Evaluation on noisy data We conduct additional experiments to evaluate the model performance under different levels of noise and incompleteness. 
The results are shown in **Table V** in the appended PDF file. We mainly consider three types of noise, which are analyzed respectively as follows: * *Push threshold*: We vary the threshold $\delta_P$ in Algorithm 1 to control the precision of propagation. A larger $\delta_P$ implies less precise propagation that ignores small feature values, while $\delta_P=10^{-5}$ is the original setting. It can be seen that by improving the precision, the final learning accuracy does not change significantly. This indicates that our setting of $\delta_P=10^{-5}$ is sufficient for propagation and does not affect the learning performance. * *Edge removal*: We randomly remove a percentage of edges to generate an incomplete variant of the graph. The LD2 model is then applied to learn on the incomplete graph. The removal causes a negative impact on the accuracy. However, as the node attributes $X$ are kept unchanged under this noise, the model is still able to achieve reasonable performance. * *Attribute noise*: We apply Gaussian noise, with standard deviations proportional to the deviation of each feature dimension, to the raw node attribute matrix $X$ before precomputation. This is more aggressive, as the noise level is much larger than the scale of the propagation precision. Consequently, the model suffers a more significant accuracy reduction. However, as the noise level increases, the model's performance converges towards the performance achieved by learning only on the adjacency embedding $P_A$ (reported as orange dash-dotted lines in Figure 3 in the paper and Figure 6 in the supplementary material). This is because the adjacency information is unaffected by this kind of noise. Overall, we elaborate on the effectiveness of performing approximate propagation in LD2. The additional experiments also demonstrate the robustness of our model, which benefits from learning both adjacency and feature information. --- Rebuttal Comment 1.1: Title: Thanks for the response. 
Comment: Thanks for the authors' hard work and attention to my feedback. The response has addressed my concerns. Therefore, I would like to raise my score. --- Reply to Comment 1.1.1: Comment: We deeply appreciate the reviewer's insightful feedback and the favorable reconsideration of our score.
Summary: The paper proposes a new graph neural network (GNN) model called LD2, which specifically targets learning on heterophilous graphs, where connected nodes tend to have different labels. The authors argue that existing models for heterophilous graphs often require iterative full-graph computations, which can be computationally expensive and difficult to scale to larger graphs. In contrast, LD2 decouples graph propagation and generates expressive embeddings prior to training, resulting in a scalable and efficient model with optimal time complexity and a memory footprint that remains independent of the graph scale. Strengths: 1. The paper studies the scalability issues of heterophilous GNNs and proposes a scalable model, LD2, which simplifies the learning process by decoupling graph propagation and generating expressive embeddings prior to training. 2. Theoretical analysis demonstrates that LD2 achieves optimal time complexity in training, as well as a memory footprint that remains independent of the graph scale. 3. Extensive experiments showcase that the proposed model is capable of lightweight minibatch training on large-scale heterophilous graphs, with up to 15× speed improvement and efficient memory utilization, while maintaining comparable or better performance than the baselines. 4. The paper is well-written and easy to follow. Weaknesses: See questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It seems that the method has to precompute the embeddings for adjacencies, e.g., the eigendecomposition in Section 3.2; is the model able to tackle inductive settings? 2. Can other methods for encoding structures act as adjacency embeddings? For example, the spatial encodings in Graphormer. 3. In line 160, it says "Particularly, the most informative aspects are often associated with 2-hop neighbors"; is this statement verified in some paper? The method design includes the 2-hop neighborhood. What about considering other hops of the neighborhood? 4. 
Can this method be applied to heterogeneous graphs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Q1 Yes. Our proposed model LD2 is capable of handling inductive tasks. This can be implemented by conducting precomputation separately on the training and inference graphs. After that, the feature transformation model can be trained on the precomputed embeddings of the training graph and then perform inductive inference based on the embeddings of the other graph. We conduct additional experiments on the inductive datasets `protein-inductive` and `yelp-inductive` to evaluate this capability. Results are shown in **Table II** in the PDF. As a brief summary, LD2 achieves comparable accuracy with GCN-GS, while PPRGo fails to adapt to such settings. Regarding efficiency, our model is 10-50x faster in training, which is in line with our complexity analysis as well as the evaluation of minibatch homophilous baselines presented in **Table 12** in the supplementary material. ### Notes on *inductive datasets* To the best of our knowledge, there is no existing inductive dataset specifically for heterophilous node classification. Hence we employ the `protein-inductive` and `yelp-inductive` datasets, which are homophilous graphs. We follow the settings in [27] in Table II. The transductive versions of these two datasets are `protein` and `yelp`, which have already been evaluated in **Tables 8-9** in the supplementary material. ### Notes on *inductive GNN models* To the best of our knowledge, existing GNNs proposed for heterophilous graphs scarcely mention inductive capability or provide relevant implementations. Notably, some heterophilous models like LINKX contain structures determined by the input graph shape, preventing their application to inductive settings. In contrast, our model structure is independent of the graph size and is applicable to the task as described above. This is also the reason that Table II only includes comparisons with the homophilous baselines GCN-GS [27] and PPRGo [29], which possess inductive implementations. 
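The inductive workflow described above can be illustrated with a minimal sketch. All names and data here are hypothetical stand-ins (random dense matrices instead of the normalized sparse adjacency, a fixed linear map instead of the trained MLP); the point is only that a per-node model whose shape depends on the feature width, not the graph size, applies unchanged to an unseen graph:

```python
import numpy as np

rng = np.random.default_rng(0)

def precompute_embeddings(adj, feats, hops=2):
    """Toy stand-in for decoupled precomputation: concatenate the raw
    features with hop-wise propagations adj @ feats, adj^2 @ feats, ..."""
    emb, h = [feats], feats
    for _ in range(hops):
        h = adj @ h
        emb.append(h)
    return np.concatenate(emb, axis=1)

# Two separate graphs: the model never sees the inference graph in training.
adj_train, feats_train = rng.random((8, 8)), rng.random((8, 3))
adj_infer, feats_infer = rng.random((5, 5)), rng.random((5, 3))

e_train = precompute_embeddings(adj_train, feats_train)  # shape (8, 9)
e_infer = precompute_embeddings(adj_infer, feats_infer)  # shape (5, 9)

# A fixed linear map standing in for the MLP: its shape depends only on the
# embedding width (9), so the same weights transfer to the new graph.
w = rng.random((e_train.shape[1], 4))
print((e_train @ w).shape, (e_infer @ w).shape)  # (8, 4) (5, 4)
```

Graph-size-dependent architectures (such as LINKX's adjacency-row input layer) cannot be reused this way, which matches the point made about inductive settings above.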
## Q2 We conduct additional experiments on the performance of replacing the $A^2$ adjacency spectral embedding (ASE) with other types of structural embeddings in **Table IV** in the PDF. Specifically, "SPD" denotes the shortest path distance used as the spatial encoding in Graphormer [R1]. The results indicate that with SPD, the model can only achieve suboptimal accuracy on `squirrel`, while exceeding the time limit on `penn94` and `genius`. We explain this by the complexity of SPD. In the implementation of [R1], the Floyd-Warshall algorithm is used, which has time complexity $O(n^3)$ and space complexity $O(n^2)$. According to Section 2 of our paper, this is not scalable to large graphs. In fact, in [R1] the approach is applied to graph regression on `PCQM4Mv2`, where the average graph size is only $n=14$ [R2]. Hence, when applied to our datasets such as `penn94` and `genius`, the SPD calculation becomes prohibitively expensive. We compare the advantages of our proposed ASE($A^2$) over other structural embeddings such as SPD from three aspects: * *Effectiveness*: As analyzed in Section 3.2, ASE is able to capture structural information, especially the homophilous components of the 2-hop graph, while SPD is more specific to encoding distance information of directly connected nodes, which is less suitable for heterophilous embedding. * *Memory efficiency*: ASE is a low-dimensional embedding of shape $n\times F$, where the feature dimension $F$ is generally much smaller than the graph scale. This implies better scalability compared to the SPD embedding, which is a dense $n\times n$ matrix. * *Time efficiency*: ASE also benefits from faster computation in our model, as described in Section 3.4; it can be computed along with the feature embeddings at a complexity linear in the edge count $m$. [R1] Do Transformers Really Perform Bad for Graph Representation? NeurIPS 2021. [R2] OGB-LSC: A large-scale challenge for machine learning on graphs. 2021. 
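To make the shape argument concrete, here is a hedged dense toy sketch of the rank-$F$ embedding $P_A = U|\Lambda|^{1/2}$ of $A^2$ (the paper's Algorithm 1 instead approximates this with sparse power iteration; the matrix and sizes below are illustrative, not real data):

```python
import numpy as np

def ase(adj, rank):
    """Toy rank-`rank` adjacency spectral embedding of A^2: keep the
    eigenpairs of largest magnitude and scale each eigenvector by
    sqrt(|eigenvalue|), so p_a @ p_a.T approximates A^2."""
    a2 = adj @ adj
    vals, vecs = np.linalg.eigh(a2)           # a2 is symmetric when adj is
    order = np.argsort(-np.abs(vals))[:rank]  # leading spectrum
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

rng = np.random.default_rng(0)
a = rng.random((50, 50))
a = (a + a.T) / 2        # toy symmetric "adjacency" matrix
p_a = ase(a, rank=8)
print(p_a.shape)         # (50, 8): an n-by-F matrix rather than the dense
                         # n-by-n table an SPD encoding would require
```

Since $A^2$ has non-negative eigenvalues for symmetric $A$, the product `p_a @ p_a.T` recovers the leading part of $A^2$, which is the sense in which the low-rank embedding preserves 2-hop structure.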
## Q3 Yes: as stated in line 161, [13] proposes to exploit 2-hop information for heterophilous GNNs, and proves in its Theorem 2 that the 2-hop neighborhood is expected to be homophily-dominant even under heterophily. A recent paper [R3] also verifies that 2-hop similarity is strongly relevant to GNN performance. To empirically showcase the effectiveness of $A^2$, we also conduct additional experiments on the embeddings generated from other hops of the neighborhood in **Table IV** in the PDF. Specifically, we apply the same rank-$F$ approximation in Eq. (2), replacing $A^2$ with $A$ and $A^3$, denoted as ASE($A$) and ASE($A^3$) respectively. It can be inferred that, on datasets where the adjacency embedding $P_A$ is important, such as `squirrel`, changing ASE to other hops significantly reduces the accuracy. On `penn94` and `genius`, the accuracy with the 1- or 3-hop adjacency embedding is no better than learning solely on the feature embeddings $P_X$ (whose performance is reported as pink lines in Figures 3 & 6 in the supplementary material). In addition, when the number of hops increases, the convergence of the decomposition in Algorithm 1 becomes slower, leading to longer precomputation time. It is therefore a theoretically and empirically reasonable choice to employ the 2-hop neighborhood for the adjacency embedding. [R3] 2-hop Neighbor Class Similarity (2NCS): A graph structural metric indicative of graph neural network performance. AAAI Workshop 2023. ## Q4 The claimed contribution of this paper primarily centers on proposing a scalable GNN for heterophilous graphs; the design for heterogeneous graphs is therefore not the focus of this work. However, we do recognize that approaches exist for transferring GNN models from homogeneous graphs to accommodate heterogeneous information. For instance, [R4] proposes learning distinct weights for each individual relation type as a "relational" model. We believe this can be a potential direction for future exploration. 
[R4] Modeling relational data with graph convolutional networks. European Semantic Web Conference 2018. --- Rebuttal Comment 1.1: Title: Thank you for the response. Comment: The authors have addressed my concerns, I would like to raise my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the time and effort invested in re-evaluating our work. Your constructive feedback is invaluable to us.
Rebuttal 1: Rebuttal: We sincerely appreciate the valuable feedback and insightful comments from all our reviewers, and are delighted to see that our effort toward improving the scalability of heterophilous GNNs is acknowledged by the reviewers. We have exerted substantial effort to investigate and address all the issues raised. Below is a summary highlighting our major updates: 1. *New datasets*: We conduct additional experiments on *7 new datasets* in total. **Table I** in the appended PDF file displays the results on homophilous `cora` and `pubmed` as mentioned by reviewer aZqt Q1, as well as heterophilous `roman-empire`, `minesweeper`, `amazon-ratings`, and `tolokers` as suggested in Q3. These findings complement those in Table 1 and 2 in main paper and Table 6, 8, 10, 12 in the supplementary material. 2. *New settings*: We extend evaluations on inductive datasets `protein-inductive` and `yelp-inductive` as asked by reviewer ZDfg Q1. We also incorporate learning on the extremely large graph `ogbn-papers100m` in reviewer aZqt Q2. The results are shown in **Table II** in the appended PDF file. 3. *New baseline*: In **Table III** in the appended PDF file, we introduce FSGNN as required by reviewer 4icY W3. This supplements the full-batch evaluation in Tables 7 and 11 in the supplementary material. 4. *Performance of adjacency embedding schemes*: We explore alternative methods including node2vec (reviewer 4icY W2), spatial encoding (reviewer ZDfg Q2), and adjacency propagation in different hops (reviewer ZDfg Q3). The results on learning accuracy and computation efficiency are displayed in **Table IV** in the appended PDF file. 5. *Robustness against noise*: We investigate the model performance under noise and incompleteness as per reviewer K3oj's Q3. **Table V** in the appended PDF file evaluates different types and levels of noise including approximate propagation, edge removal, and attribute noise. 6. 
*Contribution and comparison*: In this paper, we propose the LD2 model that specifically targets the scalability issue under heterophily. We justify our novel contribution relative to other approaches and particularly address the comments from reviewer aZqt W1 and reviewer 4icY W2. We also feature the comparison of embedding designs in response to reviewer ZDfg Q2 and reviewer 4icY W2. 7. *Details in model design*: We provide additional details and explanations on the model design, including the underlying mechanisms (reviewer K3oj W1 & Q2), convergence of the adjacency embedding (reviewer aZqt W2), precomputation performance (reviewer 4icY W1), and implementation details (reviewer 4icY Q1-4). \* Reference numbers are the same as those in the main paper by default. Pdf: /pdf/9d7c55290b7b179913e099a72623fe05af650ebc.pdf
NeurIPS_2023_submissions_huggingface
2023